
Signal Processing-Image Communication: Latest Publications

Adversarial domain adaptation with Siamese network for video object cosegmentation
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-02-15 | DOI: 10.1016/j.image.2024.117109
Li Xu , Yaodong Zhou , Bing Luo , Bo Li , Chao Zhang

Object cosegmentation aims to extract the common objects from multiple images or videos, either by employing handcrafted features to evaluate region similarity or by learning higher-level semantic information via deep learning. However, the former, based on handcrafted features, is sensitive to illumination, appearance changes, and cluttered backgrounds across the domain gap. The latter, based on deep learning, needs ground-truth object segmentations to train a co-attention model that spotlights the common object regions in different domains. This paper proposes an adversarial domain adaptation-based video object cosegmentation method that requires no pixel-wise supervision. Intuitively, high-level semantic similarity is beneficial for common object recognition; however, different video sources have inconsistent feature distributions, i.e., a domain gap. We propose an adversarial learning method to align the feature distributions of different videos, which aims to maintain the feature similarity of common objects and overcome dataset bias. To this end, a feature encoder built on a Siamese network is trained to fool a discriminative network, yielding a domain-adapted feature mapping. To further assist the feature embedding of common objects, we define a latent label-generation task to train a classification network, which makes full use of high-level semantic information. Experimental results on several video cosegmentation datasets suggest that domain adaptation based on adversarial learning can significantly improve common semantic feature extraction.
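A minimal sketch of the adversarial feature-alignment step described above, assuming a GAN-style alternating scheme in PyTorch; the module sizes, names, and training loop are illustrative assumptions, not the authors' implementation:

```python
# Adversarial alignment of features from two video domains (illustrative sketch).
import torch
import torch.nn as nn

class Encoder(nn.Module):  # shared (Siamese) feature encoder
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def forward(self, x):
        return self.net(x)

class DomainDiscriminator(nn.Module):  # guesses which video a feature came from
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, f):
        return self.net(f)

enc, disc = Encoder(), DomainDiscriminator()
opt_e = torch.optim.Adam(enc.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
frames_a = torch.randn(8, 3, 64, 64)  # frames from video/domain A (random stand-ins)
frames_b = torch.randn(8, 3, 64, 64)  # frames from video/domain B

# 1) The discriminator learns to separate the two domains.
fa, fb = enc(frames_a).detach(), enc(frames_b).detach()
loss_d = bce(disc(fa), torch.ones(8, 1)) + bce(disc(fb), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# 2) The shared encoder is trained to fool it, aligning the feature distributions.
fa, fb = enc(frames_a), enc(frames_b)
loss_e = bce(disc(fa), torch.zeros(8, 1)) + bce(disc(fb), torch.ones(8, 1))
opt_e.zero_grad(); loss_e.backward(); opt_e.step()
```

Alternating these two steps pushes the shared encoder toward features the domain classifier cannot separate, which is the alignment effect the method relies on.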

Citations: 0
Prediction-based coding with rate control for lossless region of interest in pathology imaging
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-01-22 | DOI: 10.1016/j.image.2023.117087
Joan Bartrina-Rapesta , Miguel Hernández-Cabronero , Victor Sanchez , Joan Serra-Sagristà , Pouya Jamshidi , J. Castellani

Online collaborative tools for medical diagnosis based on digital pathology images have experienced an increase in demand in recent years. Due to the large sizes of pathology images, rate control (RC) techniques that allow an accurate control of compressed file sizes are critical to meet existing bandwidth restrictions while maximizing retrieved image quality. Recently, some RC contributions to Region of Interest (RoI) coding for pathology imaging have been presented. These encode the RoI without loss and the background with some loss, and focus on providing high RC accuracy for the background area. However, none of these RC contributions deals efficiently with arbitrary RoI shapes, which hinders the accuracy of background definition and rate control. This manuscript presents a novel prediction-based coding system with an RC algorithm for RoI coding that allows arbitrary RoI shapes. Compared to other state-of-the-art methods, our proposed algorithm significantly improves upon their RC accuracy, while reducing the compressed data rate for the RoI by 30%. Furthermore, it offers higher quality in the reconstructed background areas, which has been linked to better clinical performance by expert pathologists. Finally, the proposed method also allows lossless compression of both the RoI and the background, producing data volumes 14% lower than coding techniques included in DICOM, such as HEVC and JPEG-LS.
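A toy illustration of the lossless-RoI/lossy-background principle with an arbitrarily shaped mask; the paper's predictive coder and its rate control are far more sophisticated, and the function below is hypothetical:

```python
# Keep the RoI bit-exact, coarsely quantize the background (illustrative only).
import numpy as np

def roi_encode(img: np.ndarray, roi: np.ndarray, bg_step: int = 16) -> np.ndarray:
    """img: uint8 image; roi: boolean mask of arbitrary shape, same HxW."""
    out = (img // bg_step) * bg_step  # lossy background: uniform quantization
    out[roi] = img[roi]               # lossless inside the RoI
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
yy, xx = np.mgrid[:256, :256]
roi = (yy - 100) ** 2 + (xx - 140) ** 2 < 60 ** 2  # arbitrary (here circular) RoI

rec = roi_encode(img, roi)
assert np.array_equal(rec[roi], img[roi])  # RoI is reconstructed without loss
print("background MAE:", float(np.abs(rec[~roi].astype(int) - img[~roi]).mean()))
```

The quantized background compresses much better under any entropy coder while the diagnostically relevant region stays untouched; the paper's contribution is controlling precisely how many bits the background consumes.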

Citations: 0
A Dilated MultiRes Visual Attention U-Net for historical document image binarization
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-01-15 | DOI: 10.1016/j.image.2024.117102
Nikolaos Detsikas, Nikolaos Mitianoudis, Nikolaos Papamarkos

The task of binarizing historical document images has been at the forefront of image processing research during the digital transition of libraries. Storing and transcribing valuable historical printed or handwritten material can salvage world cultural heritage and make it available online without requiring physical access. Binarization can be viewed as a pre-processing step that attempts to separate the printed/handwritten characters in the image from possible noise and stains, which assists the Optical Character Recognition (OCR) process. Many approaches have been proposed before, including deep-learning-based ones. In this article, we propose a U-Net style deep learning architecture that incorporates several other developments of deep learning, including residual connections, multi-resolution connections, visual attention blocks, and dilated convolution blocks for upsampling. The novelty of the proposed DMVAnet lies in combining these elements in a novel U-Net style architecture and in applying it to image binarization for the first time. In addition, the proposed DMVAnet is a computationally lightweight network that performs on par with, or even better than, state-of-the-art approaches with a fraction of their network size and parameters. Finally, it can run on platforms with restricted processing power and system resources, such as mobile devices, and through scaling can achieve inference times that allow for real-time applications.
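A sketch of two ingredients named in the abstract, a residual dilated-convolution block and a simple attention gate, wired into a toy binarization pass; channel counts and structure are assumptions, not the DMVAnet architecture:

```python
# Toy binarization forward pass with dilated residual and attention blocks.
import torch
import torch.nn as nn

class DilatedResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4))

    def forward(self, x):
        return torch.relu(x + self.conv(x))  # residual connection

class AttentionGate(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.score = nn.Conv2d(ch, 1, 1)

    def forward(self, skip, gate):
        return skip * torch.sigmoid(self.score(skip + gate))  # where to look

enc = nn.Conv2d(1, 16, 3, padding=1)
body = DilatedResBlock(16)
attn = AttentionGate(16)
head = nn.Conv2d(16, 1, 1)

page = torch.rand(1, 1, 128, 128)                    # grayscale document crop
f = torch.relu(enc(page))
f = attn(f, body(f))                                 # attention-gated features
binarized = (torch.sigmoid(head(f)) > 0.5).float()   # ink vs. background
print(binarized.shape)  # torch.Size([1, 1, 128, 128])
```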

Citations: 0
Concept drift challenge in multimedia anomaly detection: A case study with facial datasets
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-01-08 | DOI: 10.1016/j.image.2024.117100
Pratibha Kumari , Priyankar Choudhary , Vinit Kujur , Pradeep K. Atrey , Mukesh Saini

Anomaly detection in multimedia datasets is a widely studied area. Yet, the concept drift challenge in data has been ignored or poorly handled by the majority of anomaly detection frameworks. State-of-the-art approaches assume that the data distribution at training and deployment time will be the same. However, due to various real-life environmental factors, the data distribution may drift, or samples may drift from one class to another over time. Thus, a one-time trained model might not perform adequately. In this paper, we systematically investigate the effect of concept drift on various detection models and propose a modified Adaptive Gaussian Mixture Model (AGMM) based framework for anomaly detection in multimedia data. In contrast to the baseline AGMM, the proposed extension remembers the past for a longer period in order to handle drift better. Extensive experimental analysis shows that the proposed model handles drift in the data better than the baseline AGMM. Further, to facilitate research and comparison with the proposed framework, we contribute three multimedia datasets consisting of face samples. The face samples of each individual span an age difference of more than ten years to incorporate a longer temporal context.
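The core mechanism an adaptive GMM uses to follow drift is a running update with a forgetting factor; a smaller rate forgets more slowly, i.e., "remembers the past for a longer period" as in the proposed extension. A single-component toy version, not the paper's model:

```python
# Online Gaussian tracking of a drifting stream with anomaly flagging (toy).
import numpy as np

def agmm_update(mu, var, x, alpha=0.01):
    """One running update of mean/variance toward sample x (forgetting factor alpha)."""
    mu = (1 - alpha) * mu + alpha * x
    var = (1 - alpha) * var + alpha * (x - mu) ** 2
    return mu, var

rng = np.random.default_rng(1)
stream = np.concatenate([rng.normal(0, 1, 500),   # original distribution
                         rng.normal(3, 1, 500)])  # concept drift at t = 500

mu, var, n_flagged = 0.0, 1.0, 0
for x in stream:
    if abs(x - mu) > 3 * np.sqrt(var):  # sample poorly explained by the model
        n_flagged += 1
    mu, var = agmm_update(mu, var, x)   # model adapts, so flags fade after drift
print(f"flagged {n_flagged} samples; tracked mean after drift: {mu:.2f} (true 3.0)")
```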

Citations: 0
FAVER: Blind quality prediction of variable frame rate videos
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-01-08 | DOI: 10.1016/j.image.2024.117101
Qi Zheng , Zhengzhong Tu , Pavan C. Madhusudana , Xiaoyang Zeng , Alan C. Bovik , Yibo Fan

Video quality assessment (VQA) remains an important and challenging problem that affects many applications at the widest scales. Recent advances in mobile devices and cloud computing techniques have made it possible to capture, process, and share high resolution, high frame rate (HFR) videos across the Internet nearly instantaneously. Being able to monitor and control the quality of these streamed videos can enable the delivery of more enjoyable content and perceptually optimized rate control. Accordingly, there is a pressing need to develop VQA models that can be deployed at enormous scales. While some recent efforts have been applied to full-reference (FR) analysis of variable frame rate and HFR video quality, the development of no-reference (NR) VQA algorithms targeting frame rate variations has been little studied. Here, we propose a first-of-its-kind blind VQA model for evaluating HFR videos, which we dub the Framerate-Aware Video Evaluator w/o Reference (FAVER). FAVER uses extended models of spatial natural scene statistics that encompass space–time wavelet-decomposed video signals, and leverages deep neural networks for motion perception, to conduct efficient frame-rate-sensitive quality prediction. Our extensive experiments on several HFR video quality datasets show that FAVER outperforms other blind VQA algorithms at a reasonable computational cost. To facilitate reproducible research and public evaluation, an implementation of FAVER is freely available online: https://github.com/uniqzheng/HFR-BVQA.
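To give a flavor of such features, the sketch below performs a one-level temporal Haar wavelet split of a frame stack and computes simple band statistics of the kind natural-scene-statistics models feed to a quality regressor; it is an illustrative assumption, not the released code (that is at https://github.com/uniqzheng/HFR-BVQA):

```python
# One-level temporal Haar decomposition and band statistics (illustrative).
import numpy as np

def temporal_haar(frames):
    """frames: (T, H, W) array with even T -> (approx, detail) temporal subbands."""
    a = (frames[0::2] + frames[1::2]) / np.sqrt(2)  # temporal low-pass
    d = (frames[0::2] - frames[1::2]) / np.sqrt(2)  # temporal high-pass
    return a, d

def band_stats(band):
    b = band - band.mean()
    kurtosis = (b ** 4).mean() / (b ** 2).mean() ** 2  # ~3 for a Gaussian band
    return b.std(), kurtosis

rng = np.random.default_rng(0)
frames = rng.normal(size=(16, 64, 64))  # stand-in for a 16-frame luminance clip
approx, detail = temporal_haar(frames)
print("detail band (std, kurtosis):", band_stats(detail))
```

How such band statistics deviate from their "natural" values as the frame rate changes is what a frame-rate-aware model can learn to map to quality scores.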

Citations: 0
Stereo vision based systems for sea-state measurement and floating structures monitoring
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-01-06 | DOI: 10.1016/j.image.2023.117088
Omar Sallam, Rihui Feng, Jack Stason, Xinguo Wang, Mirjam Fürth

Using computer vision techniques such as stereo vision systems for sea-state measurement or offshore structure monitoring can improve measurement fidelity and accuracy at no significant additional cost. In this paper, two experiments (in-lab and open-sea) are conducted to study the performance of a stereo vision system in measuring water wave surface elevation and rigid-body heaving motion. For the in-lab experiment, regular water waves are generated in a wave tank at different frequencies and wave heights, and the water surface is scanned by a stereo vision camera installed on top of the tank. The surface elevation inferred by stereo vision is verified against a stationary side camera that records the water surface through the tank's transparent side window; the elevation in the side-camera recordings is extracted using an edge detection algorithm. During the in-lab experiment, a heaving buoy is also installed to test the performance of a Visual Simultaneous Localization and Mapping (VSLAM) algorithm in monitoring the buoy's heave motion. The VSLAM algorithm fuses the buoy's onboard stereo vision recordings with an embedded Inertial Measurement Unit (IMU) to estimate the six degrees of freedom (6-DOF) of a rigid body. The VSLAM buoy-motion measurements are verified by a KLT tracking algorithm applied to the recordings of the stationary side camera. The open-sea experiment is conducted at Lake Somerville, Texas, where the stereo vision system is installed to measure the water surface elevation and the directional spectrum of wind-generated irregular waves. The open-sea wave measurements by stereo vision are verified against Sofar commercial wave buoys deployed at the testing location.
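The geometric core of recovering surface elevation from a calibrated stereo pair is triangulation: depth Z = f·B/d for focal length f (in pixels), baseline B, and disparity d. A back-of-the-envelope example with made-up camera parameters:

```python
# Stereo triangulation: depth from disparity, then elevation below the rig (toy numbers).
import numpy as np

f_px = 1200.0        # focal length in pixels (assumed)
baseline_m = 0.30    # distance between the two cameras (assumed)
rig_height_m = 5.0   # camera height above the mean water level (assumed)

disparity_px = np.array([72.0, 75.0, 80.0])   # matched-pixel disparities
depth_m = f_px * baseline_m / disparity_px    # larger disparity -> closer surface
elevation_m = rig_height_m - depth_m          # wave elevation above mean level
print(depth_m)       # [5.  4.8 4.5]
print(elevation_m)   # [0.  0.2 0.5]
```

A dense disparity map from a calibrated, rectified pair yields this computation per pixel, giving the instantaneous water-surface elevation field the paper analyzes.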

Citations: 0
Enhancing fine-detail image synthesis from text descriptions by text aggregation and connection fusion module
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-01-02 | DOI: 10.1016/j.image.2023.117099
Huaping Zhou , Tao Wu , Senmao Ye , Xinru Qin , Kelei Sun

Synthesizing images with fine details from text descriptions is a challenge. Existing single-stage generative adversarial networks (GANs) fuse sentence features into the image generation process through affine transformations, which alleviates the missing-detail and heavy-computation problems of stacked networks. However, existing single-stage networks ignore the word features in the text description, resulting in a lack of detail in the generated images. To address this issue, we propose a text aggregation module (TAM) that fuses the sentence features and word features of a text through a simple spatial attention mechanism. We then build a text connection fusion (TCF) block consisting mainly of gated recurrent units (GRUs) and up-sampling blocks; it connects the text features used across the up-sampling blocks to improve text utilization. Besides, to further improve the semantic consistency between the text and the generated images, we introduce the deep attentional multimodal similarity model (DAMSM) loss, which measures text–image similarity and improves semantic consistency. Experimental results show that our method is superior to state-of-the-art models on the CUB and COCO datasets regarding both image fidelity and semantic consistency with the text.
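A loose sketch of the text-aggregation idea: word features are pooled with a simple spatial-attention weighting and fused with the sentence vector. The module, its name, and all dimensions are illustrative assumptions, not the paper's TAM:

```python
# Spatial attention over word features, fused with the sentence vector (sketch).
import torch
import torch.nn as nn

class TextAggregation(nn.Module):
    def __init__(self, d_txt=256, d_img=64):
        super().__init__()
        self.q = nn.Conv2d(d_img, d_txt, 1)  # project image features to text space

    def forward(self, feat, words, sent):
        """feat: (B, C, H, W); words: (B, L, d_txt); sent: (B, d_txt)."""
        q = self.q(feat).flatten(2).transpose(1, 2)           # (B, HW, d_txt)
        attn = torch.softmax(q @ words.transpose(1, 2), -1)   # (B, HW, L)
        word_ctx = (attn @ words).mean(1)                     # pooled word context
        return word_ctx + sent                                # fused text code

tam = TextAggregation()
fused = tam(torch.randn(2, 64, 16, 16),   # image feature map
            torch.randn(2, 12, 256),      # 12 word embeddings
            torch.randn(2, 256))          # sentence embedding
print(fused.shape)  # torch.Size([2, 256])
```

The fused code would then condition the generator (e.g., through the affine transformations mentioned above) so that word-level detail reaches the image synthesis path.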

Citations: 0
Analyzing the effect of shot noise in indirect Time-of-Flight cameras
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2023-12-29 | DOI: 10.1016/j.image.2023.117089
Nofre Sanmartin-Vich , Javier Calpe , Filiberto Pla

Continuous-wave indirect Time-of-Flight cameras obtain depth images by emitting a modulated continuous light wave and measuring the delay of the received signal. In this paper, we generalize the estimation of the effect of shot noise on the phase delay obtained with an arbitrary number of points in the Discrete Fourier Transform (DFT), extending and generalizing the analysis done in previous works for the four-point case; for that particular case, we compare our analysis with the state of the art. Moreover, we extend the error model using a second-order approximation in the error propagation analysis, which provides more accurate estimations according to Monte Carlo simulation experiments. The analysis, based on both analytical and numerical methods, shows that the phase error is, in general, related to the exposure time and only weakly to the number of DFT points. It also depends on the background illumination level, on the amplitude of the received signal and, when using a three-point DFT, on the distance to the objects.
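A small Monte Carlo experiment of the kind used for verification here: N correlation samples of a sinusoid plus background are Poisson-corrupted (shot noise), the phase is read from the first DFT bin, and the empirical phase error is measured. All numbers are illustrative:

```python
# Shot-noise-limited phase estimation in an indirect ToF pixel (Monte Carlo toy).
import numpy as np

rng = np.random.default_rng(0)
N = 4                          # number of DFT points (taps); try 3, 4, 8, ...
A, B, phi = 200.0, 500.0, 0.7  # signal amplitude, background level, true phase

k = np.arange(N)
ideal = B + A * np.cos(2 * np.pi * k / N + phi)  # noiseless correlation samples

estimates = []
for _ in range(10000):
    samples = rng.poisson(ideal)                          # shot noise
    bin1 = np.sum(samples * np.exp(-2j * np.pi * k / N))  # first DFT coefficient
    estimates.append(np.angle(bin1))                      # phase estimate
print(f"mean phase: {np.mean(estimates):.4f} (true {phi}), "
      f"std: {np.std(estimates):.4f} rad")
```

Sweeping N, A, and B in this toy lets one probe the trends the abstract describes: the error grows with background level, shrinks with signal amplitude, and depends only weakly on the number of DFT points.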

Citations: 0
Quantitative analysis of facial soft tissue using weighted cascade regression model applicable for facial plastic surgery
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2023-12-05 | DOI: 10.1016/j.image.2023.117086
Ali Fahmi Jafargholkhanloo, Mousa Shamsi

Localization of facial landmarks plays an important role in the measurement of facial metrics applicable to beauty analysis and facial plastic surgery. The first step in detecting facial landmarks is to estimate the face bounding box. Clinical images of patients' faces usually show intensity non-uniformity, which causes common face detection algorithms to perform poorly under varying illumination. To solve this problem, a modified fuzzy c-means (MFCM) algorithm is used together with varying-illumination modeling. The cascade regression method (CRM) performs well in face alignment, but it has two main drawbacks. (1) In the training phase, increasing the real data without considering normal data can lead to over-fitting; to solve this problem, a weighted CRM (WCRM) is presented. (2) In the test phase, using a fixed mean shape can place the initial shape either near to or far from the true face shape; to overcome this problem, a Procrustes-based analysis is presented. One of the most important steps in facial landmark localization is feature extraction. In this study, to increase the detection accuracy of cephalometric landmarks, local phase quantization (LPQ) is used for feature extraction in all three channels of the RGB color space. Finally, the proposed algorithm is used to measure facial anthropometric metrics. Experimental results show that the proposed algorithm outperforms the compared algorithms in facial landmark localization.
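The Procrustes-based initialization can be sketched as finding the similarity transform (scale, rotation, translation) that best maps the mean landmark shape onto a target shape. A minimal version using the standard orthogonal-Procrustes solution, with reflection handling omitted and toy shapes as data:

```python
# Align a mean landmark shape to a target via a similarity transform (sketch).
import numpy as np

def procrustes_align(mean_shape, target):
    """Both (N, 2) landmark arrays; returns mean_shape mapped onto target."""
    mu_m, mu_t = mean_shape.mean(0), target.mean(0)
    A, B = mean_shape - mu_m, target - mu_t       # center both shapes
    U, S, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt                                    # optimal rotation
    s = S.sum() / (A ** 2).sum()                  # optimal scale
    return s * A @ R + mu_t                       # rotate, scale, translate

mean_shape = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
target = 2.0 * mean_shape @ rot + np.array([5.0, 3.0])  # scaled, rotated, shifted

aligned = procrustes_align(mean_shape, target)
print(np.abs(aligned - target).max())  # ~1e-15: transform fully recovered
```

Starting the cascade from such an aligned shape, rather than a raw mean shape, keeps the initial estimate consistently close to the true face shape.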

Citations: 0
DSRNet: Depth Super-Resolution Network guided by blurry depth and clear intensity edges
IF 3.5 | CAS Tier 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2023-11-28 | DOI: 10.1016/j.image.2023.117064
Hui Lan, Cheolkon Jung

Although high-resolution (HR) depth images are required in many applications such as virtual reality and autonomous navigation, the resolution and quality of depth maps generated by consumer depth cameras fall short of these requirements. Existing depth upsampling methods focus on extracting multiscale features of the HR color image to guide low-resolution (LR) depth upsampling, which often yields blurry and inaccurate depth edges. In this paper, we propose a depth super-resolution (SR) network guided by blurry depth and clear intensity edges, called DSRNet. DSRNet differentiates effective edges from the many HR edges under the guidance of blurry depth and clear intensity edges. First, we perform global residual estimation based on an encoder–decoder architecture to extract edge structure from the HR color image for depth SR. Then, we distinguish effective edges from HR edges on the decoder side under the guidance of LR depth upsampling. To preserve edges for depth SR, we use intensity edge guidance that extracts clear intensity edges from the HR image. Finally, we use a residual loss to generate an accurate high-frequency (HF) residual and reconstruct HR depth maps. Experimental results show that DSRNet successfully reconstructs depth edges in SR results and outperforms state-of-the-art methods in terms of visual quality and quantitative measurements.
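A sketch of the global-residual idea: bicubically upsample the LR depth map and predict only a high-frequency correction, guided by features of the HR color image. Layer sizes and structure are toy assumptions, not the DSRNet architecture:

```python
# Guided depth SR as upsample + learned high-frequency residual (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualDepthSR(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.guide = nn.Conv2d(3, 16, 3, padding=1)   # HR color -> edge features
        self.fuse = nn.Sequential(
            nn.Conv2d(17, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))           # predicts the HF residual

    def forward(self, lr_depth, hr_color):
        up = F.interpolate(lr_depth, scale_factor=self.scale,
                           mode='bicubic', align_corners=False)  # blurry HR depth
        g = torch.relu(self.guide(hr_color))
        residual = self.fuse(torch.cat([up, g], dim=1))
        return up + residual                           # global residual estimation

net = ResidualDepthSR()
sr = net(torch.rand(1, 1, 32, 32),    # LR depth
         torch.rand(1, 3, 128, 128))  # registered HR color image
print(sr.shape)  # torch.Size([1, 1, 128, 128])
```

Because the bicubic upsample already carries the low frequencies, the network only has to learn where intensity edges imply true depth edges, which is where the residual concentrates.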

Citations: 0