
EURASIP Journal on Image and Video Processing: Latest Publications

A novel secured Euclidean space points algorithm for blind spatial image watermarking
IF 2.4 · CAS Zone 4 · Computer Science · Pub Date: 2021-08-10 · DOI: 10.1186/s13640-022-00590-w
Shaik Hedayath Basha, J. B
Citations: 1
Superresolution reconstruction method for ancient murals based on the stable enhanced generative adversarial network
IF 2.4 · CAS Zone 4 · Computer Science · Pub Date: 2021-07-30 · DOI: 10.1186/s13640-021-00569-z
Jianfang Cao, Yiming Jia, Minmin Yan, Xiaodong Tian
Citations: 1
Trademark infringement recognition assistance system based on human visual Gestalt psychology and trademark design
IF 2.4 · CAS Zone 4 · Computer Science · Pub Date: 2021-07-22 · DOI: 10.1186/s13640-021-00566-2
Kuo-Ming Hung, Li-Ming Chen, Ting-Wen Chen
Citations: 0
Pansharpening based on convolutional autoencoder and multi-scale guided filter
IF 2.4 · CAS Zone 4 · Computer Science · Pub Date: 2021-07-19 · DOI: 10.1186/s13640-021-00565-3
Ahmad AL Smadi, Shuyuan Yang, Zhang Kai, Atif Mehmood, Min Wang, Ala Alsanabani
Citations: 4
Robust hand gesture recognition using multiple shape-oriented visual cues
IF 2.4 · CAS Zone 4 · Computer Science · Pub Date: 2021-07-19 · DOI: 10.1186/s13640-021-00567-1
Samy Bakheet, Ayoub Al-Hamadi

Robust vision-based hand pose estimation is highly sought after but remains a challenging task, due in part to self-occlusion among the fingers. In this paper, an innovative framework for real-time static hand gesture recognition is introduced, based on an optimized shape representation built from multiple shape cues. The framework incorporates a dedicated module for hand pose estimation from depth map data, in which the hand silhouette is first extracted from the highly detailed and accurate depth map captured by a time-of-flight (ToF) depth sensor. A hybrid multi-modal descriptor that integrates multiple affine-invariant boundary-based and region-based features is created from the hand silhouette to obtain a reliable and representative description of individual gestures. Finally, an ensemble of one-vs.-all support vector machines (SVMs) is trained, one independently on each of these learned feature representations, to perform gesture classification. When evaluated on a publicly available dataset comprising a relatively large and diverse collection of egocentric hand gestures, the approach yields encouraging results that compare very favorably with those reported in the literature, while maintaining real-time operation.
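The training-and-fusion scheme the abstract describes (one classifier per shape cue, trained independently, with the per-cue outputs combined) can be sketched as follows. A nearest-centroid classifier stands in for the paper's one-vs.-all SVMs purely to keep the sketch dependency-free; all class and function names here are illustrative, not the authors' code.

```python
import numpy as np

class CueClassifier:
    """Nearest-centroid stand-in for one of the per-cue one-vs.-all SVMs.

    Used only to keep the sketch dependency-free; assumes integer class labels.
    """
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # one centroid per gesture class, computed from that class's samples
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # distance from every sample to every class centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def ensemble_predict(cue_features, models):
    """Fuse independently trained per-cue classifiers by majority vote."""
    votes = np.stack([m.predict(X) for m, X in zip(models, cue_features)])
    # ties are broken toward the smaller label
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

Each descriptor (boundary-based or region-based) gets its own model, so a cue that is unreliable for a particular gesture can be outvoted by the others at fusion time.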

Citations: 10
Fast ISP coding mode optimization algorithm based on CU texture complexity for VVC
IF 2.4 · CAS Zone 4 · Computer Science · Pub Date: 2021-07-02 · DOI: 10.1186/s13640-021-00564-4
Zhi Liu, Mengjun Dong, Xiaohan Guan, Mengmeng Zhang, Ruoyu Wang
Citations: 0
Spinal vertebrae localization and analysis on disproportionality in curvature using radiography—a comprehensive review
IF 2.4 · CAS Zone 4 · Computer Science · Pub Date: 2021-06-29 · DOI: 10.1186/s13640-021-00563-5
Joddat Fatima, Muhammad Usman Akram, Amina Jameel, Adeel Muzaffar Syed

In human anatomy, the central nervous system (CNS) acts as a significant processing hub. The CNS is clinically divided into two major parts: the brain and the spinal cord. The spinal cord, together with the brain, supports the overall communication network of the human body. The mobility of the body and the structure of the whole skeleton are also balanced with the help of the spinal column, along with reflex control. According to the Global Burden of Disease 2010, back pain is the leading cause of disability worldwide. Clinical specialists in the field estimate that almost 80% of the population experiences back problems at some point. Segmentation of the vertebrae from imaging is considered a difficult procedure. The problem has been addressed by different researchers using diverse hand-crafted features such as Harris corners, template matching, active shape models, and the Hough transform. Existing methods do not handle illumination changes and shape-based variations well. The low contrast and unclear view of the vertebrae also make it difficult to obtain good results. In recent times, convolutional neural networks (CNNs) have taken the research to the next level, producing high-accuracy results. Different CNN architectures such as UNet, FCN, and ResNet have been used for segmentation and deformity analysis. The aim of this review article is to give a comprehensive overview of how different authors have addressed these issues over time and of the different methodologies proposed for the localization and analysis of curvature deformity of the spinal vertebrae.
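To make one of the hand-crafted features the review lists concrete, here is a minimal NumPy-only Harris corner response. The 3×3 box window and k = 0.05 stand in for the Gaussian window and typical sensitivity constant of the original formulation; both choices are illustrative, not taken from any of the surveyed papers.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2 at each pixel."""
    Iy, Ix = np.gradient(img.astype(float))       # central-difference gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):
        # 3x3 box sum (zero-padded), a simple stand-in for the Gaussian window
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    # windowed structure tensor entries
    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    return Sxx * Syy - Sxy * Sxy - k * (Sxx + Syy) ** 2
```

On a synthetic image of a bright square, the response is strongly positive at the square's corners, negative along its edges, and zero in flat regions, which is what makes thresholding it usable as a keypoint detector.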

Citations: 2
Exploiting prunability for person re-identification
IF 2.4 · CAS Zone 4 · Computer Science · Pub Date: 2021-06-25 · DOI: 10.1186/s13640-021-00562-6
Hugo Masson, Amran Bhuiyan, Le Thanh Nguyen-Meidine, Mehrsan Javan, P. Siva, Ismail Ben Ayed, Eric Granger
Citations: 1
Real-time embedded object detection and tracking system in Zynq SoC
IF 2.4 · CAS Zone 4 · Computer Science · Pub Date: 2021-06-16 · DOI: 10.1186/s13640-021-00561-7
Qingbo Ji, Chong Dai, Changbo Hou, Xun Li

With the increasing application of computer vision technology in autonomous driving, robotics, and other mobile devices, more and more attention has been paid to implementing target detection and tracking algorithms on embedded platforms. The real-time performance and robustness of such algorithms are two hot research topics and challenges in this field. To address the poor real-time tracking performance of embedded systems running convolutional neural networks and the low robustness of tracking algorithms in complex scenes, this paper proposes a fast and accurate real-time video detection and tracking algorithm suitable for embedded systems. The algorithm combines the single-shot multibox detection (SSD) object detection model from deep convolutional networks with the kernel correlation filter (KCF) tracking algorithm; in addition, it accelerates the SSD model using field-programmable gate arrays, which satisfies the algorithm's real-time requirements on the embedded platform. To address model contamination after the KCF tracker fails in complex scenes, an improved validity-detection mechanism for tracking results is proposed, which overcomes the traditional KCF algorithm's inability to track robustly over long periods. To address the high miss rate of the SSD model under motion blur or illumination variation, a strategy is proposed that effectively reduces missed detections. Experimental results on the embedded platform show that the algorithm achieves real-time tracking of the object in the video and can automatically re-acquire the object to continue tracking after tracking fails.
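The correlation-filter idea at the heart of the KCF tracker can be illustrated with its simpler linear, single-channel ancestor (a MOSSE-style filter). This is not the paper's implementation, only a sketch of the Fourier-domain train/locate cycle that the kernelized version builds on; the sigma and regularization values are illustrative.

```python
import numpy as np

def train_filter(template, sigma=2.0, lam=1e-3):
    """Learn a filter whose correlation response to the template is a Gaussian peak."""
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # desired response: a sharp Gaussian centred on the target
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    F, G = np.fft.fft2(template), np.fft.fft2(g)
    # closed-form ridge-regression solution, elementwise in the Fourier domain
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def locate(H, patch):
    """Return the (row, col) of the correlation peak in a new search patch."""
    resp = np.real(np.fft.ifft2(np.fft.fft2(patch) * H))
    return np.unravel_index(resp.argmax(), resp.shape)
```

Circularly shifting the patch by (dy, dx) shifts the response peak by exactly (dy, dx) modulo the patch size, which is the property a tracker exploits to measure inter-frame motion at the cost of one FFT per frame.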

Citations: 10
Single-frame super-resolution for remote sensing images based on improved deep recursive residual network
IF 2.4 · CAS Zone 4 · Computer Science · Pub Date: 2021-05-24 · DOI: 10.1186/s13640-021-00560-8
Jiali Tang, J. Zhang, Dan Chen, N. Al-Nabhan, Chenrong Huang
Citations: 2