
Signal Processing-Image Communication: Latest Articles

Prototype-wise self-knowledge distillation for few-shot segmentation
IF 3.4 | CAS Tier 3, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-08-21 | DOI: 10.1016/j.image.2024.117186
Yadang Chen, Xinyu Xu, Chenchen Wei, Chuhan Lu

Few-shot segmentation was proposed to obtain segmentation results for an image with an unseen class by referring to a few labeled samples. However, due to the limited number of samples, many few-shot segmentation models suffer from poor generalization. Prototypical network-based few-shot segmentation still has issues with spatial inconsistency and prototype bias. Since the target class has a different appearance in each image, some specific features in the prototypes generated from the support image and its mask do not accurately reflect the generalized features of the target class. To address the support prototype consistency issue, we put forward two modules: Data Augmentation Self-knowledge Distillation (DASKD) and Prototype-wise Regularization (PWR). The DASKD module focuses on enhancing spatial consistency by using data augmentation and self-knowledge distillation. Self-knowledge distillation helps the model acquire generalized features of the target class and learn hidden knowledge from the support images. The PWR module focuses on obtaining a more representative support prototype by applying a prototype-level loss that pulls support prototypes closer to the category center. Broad evaluation experiments on PASCAL-5i and COCO-20i demonstrate that our model outperforms prior works on few-shot segmentation. Our approach surpasses the state of the art by 7.5% on PASCAL-5i and 4.2% on COCO-20i.
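
To make the prototype pipeline concrete, the sketch below (PyTorch) shows masked average pooling of support features into a prototype and a prototype-level regularization that pulls prototypes toward a running class centre, in the spirit of the PWR loss described above. The tensor shapes, the cosine-distance form of the loss, and the momentum update of the centres are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(features, mask):
    """Build a support prototype: average the feature map over the masked region.

    features: (B, C, H, W) support-image features; mask: (B, 1, H, W) binary mask.
    """
    mask = F.interpolate(mask, size=features.shape[-2:], mode="bilinear", align_corners=False)
    proto = (features * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)  # (B, C)
    return proto

def prototype_regularization(proto, class_centers, labels, momentum=0.9):
    """Prototype-level loss pulling each support prototype toward its running class centre.

    class_centers: (num_classes, C) running centres; the EMA bookkeeping is an assumption.
    """
    centers = class_centers[labels]                          # (B, C) centre for each sample
    loss = 1.0 - F.cosine_similarity(proto, centers).mean()  # cosine distance to the centre
    with torch.no_grad():                                    # exponential moving-average update
        class_centers[labels] = momentum * centers + (1 - momentum) * proto
    return loss
```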

Citations: 0
Transformer-CNN for small image object detection
IF 3.4 | CAS Tier 3, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-08-21 | DOI: 10.1016/j.image.2024.117194
Yan-Lin Chen, Chun-Liang Lin, Yu-Chen Lin, Tzu-Chun Chen

Object recognition in computer vision has been a popular research field in recent years. Although detection of regular-sized objects has achieved impressive success rates, small object detection (SOD) is still a challenging issue. In the Microsoft Common Objects in Context (MS COCO) public dataset, the detection rate of small objects is typically half that of regular-sized objects. The main reason is that small objects are often affected by multi-layer convolution and pooling, leaving insufficient detail to distinguish them from the background or similar objects, resulting in poor recognition rates or even no results. This paper presents a network architecture, Transformer-CNN, that combines a self-attention mechanism-based transformer and a convolutional neural network (CNN) to improve the recognition rate of SOD. It captures global information through a transformer and uses the translation invariance and translation equivariance of the CNN to maximize the retention of global and local features while improving the reliability and robustness of SOD. Our experiments show that the proposed model improves the small object recognition rate by 2–5% compared with general transformer architectures.
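
The general idea of pairing a CNN with transformer self-attention can be illustrated with a minimal hybrid block; the layer sizes, the single encoder layer, and the one-token-per-position scheme below are assumptions for illustration, not the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

class CNNTransformerBlock(nn.Module):
    """Toy hybrid: a small CNN extracts local features, then a transformer encoder
    layer attends over all spatial positions to add global context."""

    def __init__(self, in_ch=3, dim=64, heads=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)

    def forward(self, x):
        feat = self.cnn(x)                        # (B, dim, H/4, W/4) local features
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W/16, dim): one token per position
        tokens = self.attn(tokens)                # global self-attention over positions
        return tokens.transpose(1, 2).reshape(b, c, h, w)

# Toy usage: out = CNNTransformerBlock()(torch.randn(1, 3, 64, 64))  # -> (1, 64, 16, 16)
```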

Citations: 0
Feature extractor optimization for discriminative representations in Generalized Category Discovery
IF 3.4 | CAS Tier 3, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-08-17 | DOI: 10.1016/j.image.2024.117195
Zhonghao Chang, Xiao Li, Zihao Zhao

The Generalized Category Discovery (GCD) task involves transferring knowledge from labeled known categories to recognize both known and novel categories within an unlabeled dataset. A significant challenge arises from the lack of prior information for novel categories. To address this, we develop a feature extractor that can learn discriminative features for both known and novel categories. Our approach leverages the observation that similar samples often belong to the same class. We construct a similarity matrix and employ a similarity contrastive loss to increase the similarity between similar samples in the feature space. Additionally, we incorporate cluster labels to further refine the feature extractor, utilizing K-means clustering to assign these labels to unlabeled data, providing valuable supervision. Our feature extractor is optimized using instance-level and class-level contrastive learning constraints. These constraints promote similarity maximization in both the instance space and the label space for instances sharing the same pseudo-labels. These three components complement each other, facilitating the learning of discriminative representations for both known and novel categories. Through comprehensive evaluations on generic image recognition datasets and challenging fine-grained datasets, we demonstrate that our proposed method achieves state-of-the-art performance.
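
A minimal sketch of a similarity-based contrastive loss of the kind described above: pairs whose cosine similarity exceeds a threshold, or which share a K-means pseudo-label, are treated as positives. The threshold and temperature values are assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def similarity_contrastive_loss(features, pseudo_labels, threshold=0.8, temperature=0.1):
    """Treat pairs as positives when their cosine similarity exceeds `threshold`
    or when they share the same K-means pseudo-label, then pull them together."""
    z = F.normalize(features, dim=1)                # (N, D) unit-norm embeddings
    sim = z @ z.t()                                 # (N, N) cosine similarity matrix
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = ((sim > threshold) | (pseudo_labels[:, None] == pseudo_labels[None, :])) & ~eye

    logits = (sim / temperature).masked_fill(eye, float("-inf"))   # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    pos = pos.float()
    per_anchor = -(log_prob * pos).sum(dim=1) / pos.sum(dim=1).clamp(min=1.0)
    return per_anchor[pos.sum(dim=1) > 0].mean()    # average over anchors with positives
```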

Citations: 0
Image-based virtual try-on: Fidelity and simplification
IF 3.4 | CAS Tier 3, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-08-16 | DOI: 10.1016/j.image.2024.117189
Tasin Islam, Alina Miron, Xiaohui Liu, Yongmin Li

We introduce a novel image-based virtual try-on model designed to replace a candidate’s garment with a desired target item. The proposed model comprises three modules: segmentation, garment warping, and candidate-clothing fusion. Previous methods have shown limitations in cases involving significant differences between the original and target clothing, as well as substantial overlapping of body parts. Our model addresses these limitations by employing two key strategies. Firstly, it utilises a candidate representation based on an RGB skeleton image to enhance spatial relationships among body parts, resulting in robust segmentation and improved occlusion handling. Secondly, a truncated U-Net is employed in both the segmentation and warping modules, enhancing segmentation performance and accelerating the try-on process. The warping module leverages an efficient affine transform for ease of training. Comparative evaluations against state-of-the-art models demonstrate the competitive performance of our proposed model across various scenarios, particularly excelling at handling occlusions and large differences between the original and target clothing. This research presents a promising solution for image-based virtual try-on, advancing the field by overcoming key limitations and achieving superior performance.
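
The "efficient affine transform" used by the warping module can be illustrated with PyTorch's affine_grid/grid_sample; in the hypothetical usage below the 2x3 matrix is hard-coded, whereas in a real try-on model it would be regressed by the network.

```python
import torch
import torch.nn.functional as F

def warp_garment(garment, theta):
    """Warp a garment image with a 2x3 affine matrix.

    garment: (B, 3, H, W) garment image; theta: (B, 2, 3) affine parameters.
    """
    grid = F.affine_grid(theta, garment.size(), align_corners=False)   # sampling grid
    return F.grid_sample(garment, grid, align_corners=False)           # bilinear warp

# Toy usage with an identity-plus-shift transform standing in for a network prediction.
garment = torch.rand(1, 3, 256, 192)
theta = torch.tensor([[[1.0, 0.0, 0.1],
                       [0.0, 1.0, 0.0]]])   # slight horizontal shift
warped = warp_garment(garment, theta)       # (1, 3, 256, 192)
```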

Citations: 0
Duration-aware and mode-aware micro-expression spotting for long video sequences
IF 3.4 | CAS Tier 3, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-08-10 | DOI: 10.1016/j.image.2024.117192
Jing Liu, Xin Li, Jiaqi Zhang, Guangtao Zhai, Yuting Su, Yuyi Zhang, Bo Wang

Micro-expressions (MEs) are unconscious, instant and slight facial movements that reveal people’s true emotions. Locating MEs is a prerequisite for classifying them, yet only a few studies focus on this task. Among them, sliding window based methods are the most prevalent. Due to differences in individual physiological and psychological mechanisms, and some uncontrollable factors, the durations and transition modes of different MEs fluctuate greatly. Limited to a fixed window scale and mode, traditional sliding window based ME spotting methods fail to capture the motion changes of all MEs exactly, resulting in performance degradation. In this paper, an ensemble learning based duration- and mode-aware (DMA) ME spotting framework is proposed. Specifically, we exploit multiple sliding windows of different scales and modes to generate multiple weak detectors, each of which accommodates MEs with a certain duration and transition mode. Additionally, to get a more comprehensive strong detector, we integrate the analysis results of multiple weak detectors using a voting based aggregation module. Furthermore, a novel interval generation scheme is designed to merge close peaks and their neighbor frames into a complete ME interval. Experimental results on two long video databases show the promising performance of our proposed DMA framework compared with state-of-the-art methods. The codes are available at https://github.com/TJUMMG/DMA-ME-Spotting.
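
A minimal NumPy sketch of the duration-aware idea: each window scale acts as a weak detector over a per-frame motion signal, and a simple majority vote plus interval merging aggregates them. The scales, threshold rule, and majority criterion are illustrative assumptions, not the paper's design.

```python
import numpy as np

def spot_candidates(motion, window_sizes=(9, 15, 21), k=1.0):
    """Toy duration-aware spotting over a per-frame motion signal.

    motion: (T,) per-frame motion magnitude (e.g. optical-flow or frame-difference score).
    Each window size is one weak detector that votes for frames whose locally
    averaged motion exceeds mean + k*std; a majority vote keeps candidate frames.
    """
    T = len(motion)
    votes = np.zeros(T)
    for w in window_sizes:                              # one weak detector per scale
        smoothed = np.convolve(motion, np.ones(w) / w, mode="same")
        thr = smoothed.mean() + k * smoothed.std()
        votes += (smoothed > thr).astype(float)
    keep = votes >= (len(window_sizes) / 2)             # majority-vote aggregation

    # Merge consecutive kept frames into candidate ME intervals.
    intervals, start = [], None
    for t, flag in enumerate(keep):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            intervals.append((start, t - 1))
            start = None
    if start is not None:
        intervals.append((start, T - 1))
    return intervals
```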

Citations: 0
Low-rank tensor completion based on tensor train rank with partially overlapped sub-blocks and total variation
IF 3.4 | CAS Tier 3, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-08-10 | DOI: 10.1016/j.image.2024.117193
Jingfei He, Zezhong Yang, Xunan Zheng, Xiaoyue Zhang, Ao Li

Recently, low-rank tensor completion methods based on tensor train (TT) rank have achieved promising performance. Ket augmentation (KA) is commonly used in TT rank-based methods to improve performance by converting low-dimensional tensors to higher-dimensional tensors. However, KA also destroys the original structure and image continuity of the low-dimensional tensors, causing block artifacts. To tackle this issue, a low-rank tensor completion method based on TT rank with tensor augmentation by partially overlapped sub-blocks (TAPOS) and total variation (TV) is proposed in this paper. The proposed TAPOS preserves the image continuity of the original tensor and enhances the low-rankness of the generated higher-dimensional tensors, and a weighted de-augmentation method is used to assign different weights to the elements of sub-blocks and further reduce block artifacts. To further alleviate block artifacts and improve reconstruction accuracy, TV is introduced into the TAPOS-based model to add a piecewise-smooth prior. A parallel matrix decomposition method is introduced to estimate the TT rank and reduce the computational cost. Numerical experiments show that the proposed method outperforms existing state-of-the-art methods.
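
The sub-block augmentation and weighted de-augmentation steps can be sketched in NumPy as below: the image is cut into partially overlapped blocks and recombined by averaging over the per-pixel visit counts. Uniform averaging weights are an assumption; the paper's weighting scheme is more elaborate.

```python
import numpy as np

def split_overlapping(img, block=32, stride=24):
    """Cut a 2-D (grayscale) image into partially overlapped sub-blocks.

    stride < block produces the overlap; border pixels beyond the last full
    block are ignored in this sketch.
    """
    H, W = img.shape
    blocks, coords = [], []
    for i in range(0, H - block + 1, stride):
        for j in range(0, W - block + 1, stride):
            blocks.append(img[i:i + block, j:j + block].copy())
            coords.append((i, j))
    return blocks, coords

def merge_overlapping(blocks, coords, shape, block=32):
    """Weighted de-augmentation: overlapping pixels are averaged by visit count."""
    out = np.zeros(shape, dtype=float)
    weight = np.zeros(shape, dtype=float)
    for b, (i, j) in zip(blocks, coords):
        out[i:i + block, j:j + block] += b
        weight[i:i + block, j:j + block] += 1.0
    return out / np.maximum(weight, 1e-6)
```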

Citations: 0
HDR-ChipQA: No-reference quality assessment on High Dynamic Range videos
IF 3.4 | CAS Tier 3, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-08-10 | DOI: 10.1016/j.image.2024.117191
Joshua P. Ebenezer, Zaixi Shang, Yongjun Wu, Hai Wei, Sriram Sethuraman, Alan C. Bovik

We present a no-reference video quality model and algorithm that delivers standout performance for High Dynamic Range (HDR) videos, which we call HDR-ChipQA. HDR videos represent wider ranges of luminances, details, and colors than Standard Dynamic Range (SDR) videos. The growing adoption of HDR in massively scaled video networks has driven the need for video quality assessment (VQA) algorithms that better account for distortions on HDR content. In particular, standard VQA models may fail to capture conspicuous distortions at the extreme ends of the dynamic range, because the features that drive them may be dominated by distortions that pervade the mid-ranges of the signal. We introduce a new approach whereby a local expansive nonlinearity emphasizes distortions occurring at the higher and lower ends of the local luma range, allowing for the definition of additional quality-aware features that are computed along a separate path. These features are not HDR-specific, and also improve VQA on SDR video contents, albeit to a reduced degree. We show that this preprocessing step significantly boosts the power of distortion-sensitive natural video statistics (NVS) features when used to predict the quality of HDR content. In a similar manner, we separately compute novel wide-gamut color features using the same nonlinear processing steps. We have found that our model significantly outperforms SDR VQA algorithms on the only publicly available, comprehensive HDR database, while also attaining state-of-the-art performance on SDR content.
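
A rough sketch of a local expansive nonlinearity of the kind described: the local mean luma is removed and large deviations are expanded by a sign-preserving power greater than one, so distortions near the extremes of the local range carry more weight. The Gaussian window and exponent below are assumptions, not the values used by HDR-ChipQA.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def expansive_nonlinearity(luma, sigma=7.0, alpha=2.0):
    """Emphasise deviations at the high/low ends of the local luma range.

    luma: 2-D array in [0, 1]. The local mean is removed, then a sign-preserving
    power > 1 expands large deviations relative to small (mid-range) ones.
    """
    local_mean = gaussian_filter(luma, sigma)      # local luma level
    centered = luma - local_mean                   # deviation from the local level
    return np.sign(centered) * np.abs(centered) ** alpha
```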

Citations: 0
A virtual-reality spatial matching algorithm and its application on equipment maintenance support: System design and user study
IF 3.4 | CAS Tier 3, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-08-10 | DOI: 10.1016/j.image.2024.117188
Xiao Yang, Fanghao Huang, Jiacheng Jiang, Zheng Chen

Equipment maintenance support is an important technical measure for maintaining the expected performance of equipment. However, current maintenance support is mainly completed by maintainers under the guidance of technical manuals or additional experts, which may be insufficient for advanced equipment with rapid update rates and complex internal structures. The emerging technology of augmented reality (AR) provides a new solution for equipment maintenance support, while one of the key issues limiting the practical application of AR in the maintenance field is the spatial matching between virtual space and reality space. In this paper, a virtual-reality spatial matching algorithm is designed to accurately superimpose virtual information onto the corresponding real scene on AR glasses. In this algorithm, two methods are proposed to achieve stable matching of virtual and reality space. Specifically, to obtain saliency maps with less background interference and improved saliency detection accuracy, a saliency detection method is designed based on super-pixel segmentation. To deal with the uneven distribution of feature points and weak robustness to lighting changes, a feature extraction and matching method is proposed that uses the obtained saliency map to acquire the feature point matching set. Finally, an immersive equipment maintenance support system (IEMSS) is developed based on this spatial matching algorithm, which provides maintainers with immediate and immersive guidance to improve the efficiency and safety of maintenance tasks, and offers maintenance training with expanded virtual information for inexperienced maintainers when experts are limited. Several comparative experiments are conducted to verify the effectiveness of the proposed methods, and a user study of a real system application is carried out to further evaluate the superiority of these methods when applied in the IEMSS.
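
The saliency-restricted feature extraction and matching step can be sketched with OpenCV ORB features, where the saliency map thresholded into a mask limits keypoint detection to salient regions. ORB and the brute-force Hamming matcher are stand-ins chosen for illustration; the paper's own detector and saliency model differ.

```python
import cv2
import numpy as np

def match_in_salient_regions(img_a, img_b, saliency_a, saliency_b, thresh=0.5):
    """Detect and match keypoints only inside salient regions of two grayscale images.

    saliency_*: float maps in [0, 1]; pixels above `thresh` form the detection mask.
    Returns a list of matched point pairs ((xa, ya), (xb, yb)).
    """
    orb = cv2.ORB_create(nfeatures=1000)
    mask_a = (saliency_a > thresh).astype(np.uint8) * 255
    mask_b = (saliency_b > thresh).astype(np.uint8) * 255
    kp_a, des_a = orb.detectAndCompute(img_a, mask_a)   # keypoints restricted to the mask
    kp_b, des_b = orb.detectAndCompute(img_b, mask_b)
    if des_a is None or des_b is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches]
```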

Citations: 0
A ‘deep’ review of video super-resolution
IF 3.4 | CAS Tier 3, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-07-27 | DOI: 10.1016/j.image.2024.117175
Subhadra Gopalakrishnan, Anustup Choudhury

Video super-resolution (VSR) is an ill-posed inverse problem where the goal is to obtain high-resolution video content from a low-resolution counterpart. In this survey, we trace the history of video super-resolution techniques beginning with traditional methods, showing the evolution towards techniques that use shallow networks and, finally, the recent trends where deep learning algorithms result in state-of-the-art performance. Specifically, we consider 60 neural network-based VSR techniques in addition to 8 traditional VSR techniques. We extensively cover the deep learning-based techniques including the latest models and introduce a novel taxonomy depending on their architecture. We discuss the pros and cons of each category of techniques. We consider the various components of the problem including the choice of loss functions, evaluation criteria and the benchmark datasets used for evaluation. We present a comparison of the existing techniques using common datasets, providing insights into the relative rankings of these methods. We compare the network architectures based on their computation speed and network complexity. We also discuss the limitations of existing loss functions and the evaluation criteria that are currently used and propose alternate suggestions. Finally, we identify some of the current challenges and provide future research directions towards video super-resolution, thus providing a comprehensive understanding of the problem.
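
As a concrete instance of the evaluation criteria such surveys compare, the snippet below computes PSNR between a ground-truth frame and a super-resolved frame; SSIM and perceptual metrics follow the same pattern but need dedicated implementations.

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a ground-truth frame and a restored frame."""
    mse = np.mean((reference.astype(float) - restored.astype(float)) ** 2)
    if mse == 0:
        return float("inf")   # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```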

Citations: 0
A comprehensive review of quality of experience for emerging video services
IF 3.4 | CAS Tier 3, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-07-26 | DOI: 10.1016/j.image.2024.117176
Weiling Chen, Fengquan Lan, Hongan Wei, Tiesong Zhao, Wei Liu, Yiwen Xu

The recent advances in multimedia technology have significantly expanded the range of audio–visual applications. The continuous enhancement of display quality has led to the emergence of new attributes in video, such as enhanced visual immersion and widespread availability. Within media content, video signals are presented in various formats including stereoscopic/3D, panoramic/360° and holographic images. The signals are also combined with other sensory elements, such as audio, tactile, and olfactory cues, creating a comprehensive multi-sensory experience for the user. The development of both qualitative and quantitative Quality of Experience (QoE) metrics is crucial for enhancing the subjective experience in immersive scenarios, providing valuable guidelines for system enhancement. In this paper, we review the most recent achievements in QoE assessment for immersive scenarios, summarize the current challenges related to QoE issues, and present outlooks for QoE applications in these scenarios. The aim of our overview is to offer a valuable reference for researchers in the domain of multimedia delivery.

Citations: 0