
Multimedia Systems: Latest Publications

Dual convolutional neural network with attention for image blind denoising
IF 3.9, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-08. DOI: 10.1007/s00530-024-01469-8
Wencong Wu, Guannan Lv, Yingying Duan, Peng Liang, Yungang Zhang, Yuelong Xia

Noise removal is an essential preprocessing step for many computer vision tasks. Currently, many denoising models based on deep neural networks perform well in removing noise with known distributions (i.e., additive white Gaussian noise). However, eliminating real noise is still a very challenging task, since real-world noise often does not follow a single type of distribution and may vary spatially. In this paper, we present a novel dual convolutional neural network (CNN) with attention for image blind denoising, named DCANet. To the best of our knowledge, the proposed DCANet is the first work that integrates both a dual CNN and an attention mechanism for image denoising. The DCANet is composed of a noise estimation network, a spatial and channel attention module (SCAM), and a dual CNN. The noise estimation network estimates the spatial distribution and the noise level in an image. The noisy image and its estimated noise are combined as the input of the SCAM, and a dual CNN containing two different branches is designed to learn complementary features to obtain the denoised image. The experimental results verify that the proposed DCANet can suppress both synthetic and real noise effectively. The code of DCANet is available at https://github.com/WenCongWu/DCANet.
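
The pipeline described above can be pictured with a minimal PyTorch sketch: a noise estimation network predicts a per-pixel noise map, the noisy image and noise map pass through spatial and channel attention, and a two-branch CNN produces the output. All layer sizes, block counts, the residual formulation, and the module internals are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class NoiseEstimator(nn.Module):
    """Predicts a per-pixel noise map from the noisy image."""
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class SCAM(nn.Module):
    """Spatial + channel attention over the concatenated image and noise map."""
    def __init__(self, in_ch=6, ch=64):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid(),
        )
        self.spatial_att = nn.Sequential(nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid())
    def forward(self, x):
        f = self.embed(x)
        f = f * self.channel_att(f)      # channel re-weighting
        return f * self.spatial_att(f)   # spatial re-weighting

class DualBranchCNN(nn.Module):
    """Two complementary branches (plain vs. dilated convs) fused into a residual."""
    def __init__(self, ch=64):
        super().__init__()
        self.branch_a = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.branch_b = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * ch, 3, 3, padding=1)
    def forward(self, f):
        return self.fuse(torch.cat([self.branch_a(f), self.branch_b(f)], dim=1))

class DCANetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.noise_est, self.scam, self.dual = NoiseEstimator(), SCAM(), DualBranchCNN()
    def forward(self, noisy):
        noise_map = self.noise_est(noisy)
        feats = self.scam(torch.cat([noisy, noise_map], dim=1))
        return noisy - self.dual(feats)  # residual denoising (an assumed formulation)

denoised = DCANetSketch()(torch.randn(1, 3, 64, 64))  # -> shape (1, 3, 64, 64)
```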

Citations: 0
Physical-prior-guided single image dehazing network via unpaired contrastive learning
IF 3.9, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-06. DOI: 10.1007/s00530-024-01462-1
Mawei Wu, Aiwen Jiang, Hourong Chen, Jihua Ye

Image dehazing aims to restore high-fidelity clear images from hazy ones. It has wide applications in many intelligent image analysis systems in the computer vision area. Many prior-based and learning-based methods have already made significant progress in this field. However, the domain gap between synthetic and real hazy images still negatively impacts a model's generalization performance in real-world scenarios. In this paper, we propose an effective physical-prior-guided single image dehazing network via unpaired contrastive learning (PDUNet). The learning process of PDUNet consists of a pre-training stage on synthetic data and a fine-tuning stage on real data. Mixed-prior modules, controllable zero-convolution modules, and unpaired contrastive regularization with hybrid transmission maps are proposed to fully utilize the complementary advantages of both prior-based and learning-based strategies. Specifically, the mixed-prior module provides precise haze distributions. Zero-convolution modules, serving as a controllable bypass, supplement the pre-trained model with additional real-world haze information and mitigate the risk of catastrophic forgetting during fine-tuning. Hybrid prior-generated transmission maps are employed for unpaired contrastive regularization. By leveraging physical prior statistics and a large amount of unlabeled real data, the proposed PDUNet exhibits excellent generalization and adaptability in handling real-world hazy scenarios. Extensive experiments on public datasets demonstrate that the proposed method improves PSNR, NIQE, and BRISQUE values by an average of 0.33, 0.69, and 2.3, respectively, with model efficiency comparable to SOTA methods. Related codes and model parameters will be publicly available on GitHub: https://github.com/Jotra9872/PDU-Net.
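
The "controllable zero-convolution" bypass mentioned above can be illustrated with a small PyTorch sketch: convolutions initialized to zero make the branch an exact no-op at the start of fine-tuning, so the pre-trained backbone is preserved and real-world haze information is absorbed gradually. The module layout and channel sizes here are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

def zero_conv(in_ch, out_ch):
    """1x1 convolution whose weights and bias start at exactly zero."""
    conv = nn.Conv2d(in_ch, out_ch, kernel_size=1)
    nn.init.zeros_(conv.weight)
    nn.init.zeros_(conv.bias)
    return conv

class ZeroConvBypass(nn.Module):
    """Adds a trainable residual branch that starts as an exact no-op."""
    def __init__(self, channels=64):
        super().__init__()
        self.adapter = nn.Sequential(
            zero_conv(channels, channels),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            zero_conv(channels, channels),
        )
    def forward(self, backbone_feat):
        # Output equals the backbone feature at initialization (zero residual),
        # which mitigates catastrophic forgetting when fine-tuning on real data.
        return backbone_feat + self.adapter(backbone_feat)

feat = torch.randn(1, 64, 32, 32)
bypass = ZeroConvBypass(64)
assert torch.allclose(bypass(feat), feat)  # no-op before any fine-tuning step
```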

Citations: 0
Exploring coherence from heterogeneous representations for OCR image captioning
IF 3.9, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-09-06. DOI: 10.1007/s00530-024-01470-1
Yao Zhang, Zijie Song, Zhenzhen Hu

Text-based image captioning is an important task, aiming to generate descriptions based on reading and reasoning about the scene texts in images. A text-based image contains both textual and visual information, which is difficult to describe comprehensively. Recent works fail to adequately model the relationships between features of different modalities and their fine-grained alignment. Due to the multimodal characteristics of scene texts, the representations of text usually come from multiple visual and textual encoders, leading to heterogeneous features. Although many works have paid attention to fusing features from different sources, they ignore the direct correlation between heterogeneous features, and the coherence in scene text has not been fully exploited. In this paper, we propose a Heterogeneous Attention Module (HAM) to enhance the cross-modal representations of OCR tokens and apply it to text-based image captioning. The HAM is designed to capture the coherence between different modalities of OCR tokens and provide context-aware scene text representations to generate accurate image captions. To the best of our knowledge, we are the first to apply a heterogeneous attention mechanism to explore the coherence in OCR tokens for text-based image captioning. By calculating the heterogeneous similarity, we interactively enhance the alignment between visual and textual information in OCR. We conduct experiments on the TextCaps dataset. Under the same setting, the results show that our model achieves competitive performance compared with advanced methods, and an ablation study demonstrates that our framework enhances the original model in all metrics.
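
A rough PyTorch sketch of the heterogeneous-attention idea: visual and textual features of the same OCR tokens are projected into a shared space, a heterogeneous similarity matrix couples the two modalities, and each token is enriched with context from the other modality. The feature dimensions and fusion rule are illustrative assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeterogeneousAttentionSketch(nn.Module):
    def __init__(self, vis_dim=2048, txt_dim=300, dim=512):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, dim)
        self.txt_proj = nn.Linear(txt_dim, dim)
        self.out = nn.Linear(2 * dim, dim)
        self.scale = dim ** -0.5

    def forward(self, vis_feats, txt_feats):
        # vis_feats: (N, vis_dim) appearance features of N OCR tokens
        # txt_feats: (N, txt_dim) word-embedding features of the same tokens
        v = self.vis_proj(vis_feats)                         # (N, dim)
        t = self.txt_proj(txt_feats)                         # (N, dim)
        sim = (v @ t.transpose(0, 1)) * self.scale           # heterogeneous similarity
        v_ctx = F.softmax(sim, dim=-1) @ t                   # text context for each visual token
        t_ctx = F.softmax(sim.transpose(0, 1), dim=-1) @ v   # visual context for each text token
        fused = self.out(torch.cat([v + v_ctx, t + t_ctx], dim=-1))
        return fused                                         # (N, dim) coherence-aware tokens

tokens = HeterogeneousAttentionSketch()(torch.randn(12, 2048), torch.randn(12, 300))
```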

Citations: 0
Adaptive B-spline curve fitting with minimal control points using an improved sparrow search algorithm for geometric modeling of aero-engine blades
IF 3.9, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-30. DOI: 10.1007/s00530-024-01452-3
Chang Su, Yong Han, Suihao Lu, Dongsheng Jiang

In Industry 4.0 and advanced manufacturing, producing high-precision, complex products such as aero-engine blades involves sophisticated processes. Digital twin technology enables the creation of high-precision, real-time 3D models, optimizing manufacturing processes and improving product qualification rates. Establishing geometric models is crucial for effective digital twins. Traditional methods often fail to meet precision and efficiency demands. This paper proposes a fitting method based on an improved sparrow search algorithm (SSA) for high-precision curve fitting with minimal control points. This enhances modeling precision and efficiency, creating models suitable for digital twin environments and improving machining qualification rates. The SSA's position update function is enhanced, and an internal knot vector update range prevents premature convergence and improves global search capabilities. Through automatic iterations, optimal control points are calculated using the least squares method. Fitness values, based on local and global errors, are iteratively calculated until the target accuracy is achieved. Validation with aero-engine blade data showed fitting accuracies of 1e−3 mm and 1e−5 mm. Efficiency in searching for minimal control points improved by 34.7%–49.6% compared to traditional methods. This SSA-based fitting method significantly advances geometric modeling precision and efficiency, addressing modern manufacturing challenges with high-quality, real-time production capabilities.
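
The least-squares fitting loop with an automatically growing control-point count can be sketched in NumPy/SciPy as below. The paper's contribution is the improved SSA that optimizes knot placement; in this simplified sketch, uniform interior knots stand in for that optimization, so it only shows the surrounding fit-evaluate-iterate structure under stated assumptions.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

def clamped_uniform_knots(x, n_ctrl, k=3):
    """Clamped knot vector with uniformly spaced interior knots."""
    n_interior = n_ctrl - k - 1
    interior = np.linspace(x[0], x[-1], n_interior + 2)[1:-1]
    return np.concatenate([[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)])

def fit_min_control_points(x, y, tol=1e-3, k=3, max_ctrl=200):
    """Smallest control-point count whose max (local) error is below tol."""
    for n_ctrl in range(k + 1, max_ctrl):
        t = clamped_uniform_knots(x, n_ctrl, k)
        spline = make_lsq_spline(x, y, t, k=k)      # least-squares control points
        residual = y - spline(x)
        local_err = np.max(np.abs(residual))        # worst-case deviation
        global_err = np.sqrt(np.mean(residual**2))  # overall deviation
        if local_err < tol:
            return spline, n_ctrl, local_err, global_err
    raise RuntimeError("target accuracy not reached within max_ctrl")

# Example: a blade-like section curve sampled densely along its parameter.
u = np.linspace(0.0, 1.0, 500)
pts = np.sin(2 * np.pi * u) + 0.3 * np.cos(6 * np.pi * u)
spline, n_ctrl, e_loc, e_glob = fit_min_control_points(u, pts, tol=1e-3)
print(n_ctrl, e_loc, e_glob)
```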

Citations: 0
HNQA: histogram-based descriptors for fast night-time image quality assessment
IF 3.9, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-30. DOI: 10.1007/s00530-024-01440-7
Maryam Karimi, Mansour Nejati

Taking high-quality images at night is a challenging issue for many applications. Therefore, assessing the quality of night-time images (NTIs) is a significant area of research. Since there is no reference image for such images, night-time image quality assessment (NTQA) must be performed blindly. Although the field of blind quality assessment of natural images has gained significant popularity over the past decade, the quality assessment of NTIs usually confronts complex distortions such as contrast loss, chroma noise, color desaturation, and detail blur, which have been less investigated. In this paper, a blind night-time image quality evaluation model is proposed by generating innovative quality-aware local feature maps, including detail exposedness, color saturation, sharpness, contrast, and naturalness. In the next step, these maps are compressed and converted into global feature representations using histograms. These feature histograms are used to learn a Gaussian process regression (GPR) quality prediction model. The suggested BIQA approach for night images undergoes a comprehensive evaluation on a standard night image database. The results of the experiments demonstrate the superior prediction performance of the proposed BIQA method for night images compared to other advanced BIQA methods, while remaining simple to implement and fast to execute. Hence, it is readily applicable in real-time scenarios.
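
A minimal Python sketch of the descriptor pipeline: a few quality-aware local maps are computed, each is compressed into a histogram, and the concatenated histograms feed a Gaussian process regressor. The particular maps, bin counts, and kernel below are assumptions for illustration (the paper's maps also include exposedness and naturalness), and the training targets here are random stand-ins.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def quality_histograms(rgb, bins=16):
    """rgb: float array in [0, 1] of shape (H, W, 3) -> 1-D feature vector."""
    gray = rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    sharpness = np.hypot(gx, gy)                        # local gradient magnitude
    sharpness = sharpness / (sharpness.max() + 1e-8)    # normalize to [0, 1]
    saturation = rgb.max(axis=2) - rgb.min(axis=2)      # simple chroma proxy
    contrast = np.abs(gray - gray.mean())               # deviation from mean luminance
    feats = []
    for fmap in (sharpness, saturation, contrast):
        hist, _ = np.histogram(fmap, bins=bins, range=(0, 1), density=True)
        feats.append(hist)
    return np.concatenate(feats)                        # global, fixed-length descriptor

# Toy training set: random "images" with synthetic quality scores.
rng = np.random.default_rng(0)
X = np.stack([quality_histograms(rng.random((64, 64, 3))) for _ in range(40)])
y = rng.random(40)                                      # stand-in for subjective scores
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)
pred = gpr.predict(quality_histograms(rng.random((64, 64, 3)))[None, :])
```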

Citations: 0
Wacml: based on graph neural network for imbalanced node classification algorithm
IF 3.9, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-30. DOI: 10.1007/s00530-024-01454-1
Junfeng Wang, Jiayue Yang, Lidun

The presence of a large number of robot accounts on social media has led to negative social impacts. In most cases, the distribution of robot accounts and real human accounts is imbalanced, resulting in insufficient representativeness and poor performance on minority-class samples. Graph neural networks can effectively utilize user interaction and are widely used to process graph-structured data, achieving good performance in robot detection. Previous GNN-based robot detection methods have mostly considered the impact of class imbalance; however, in graph-structured data, the imbalance caused by differences in the position and structure of labeled nodes makes the results of a GNN prone to bias toward larger categories. Due to the lack of consideration for the unique connectivity issues of the graph structure, the node classification performance is not ideal. Therefore, in response to the shortcomings of existing schemes, this paper proposes a class-imbalanced node classification algorithm based on minority weighting and an abnormal connectivity margin loss, which extends the traditional imbalanced classification idea from the field of machine learning to graph-structured data and jointly handles the problems of quantity imbalance and abnormal graph connectivity to improve the GNN's perception of connection anomalies. In the node feature aggregation stage, weighted aggregation is applied to minority classes. In the oversampling stage, the SMOTE algorithm is used to process imbalanced data while considering node representation and topology structure. An edge generator is simultaneously trained to model relationship information and, combined with the abnormal connectivity margin loss, enhances the model's learning of connectivity information, greatly improving the quality of the edge generator. Finally, we evaluated the method on a publicly available dataset, and the experimental results show that it achieves good results in classifying imbalanced nodes.
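
The SMOTE-style oversampling step on graph nodes can be sketched as interpolation between a minority node embedding and one of its nearest minority neighbours, as below. The weighted aggregation, edge generator, and margin loss are omitted, and the neighbour-selection rule is an assumption rather than the paper's exact procedure.

```python
import numpy as np

def smote_nodes(embeddings, labels, minority_class, n_new, k=5, seed=0):
    """embeddings: (N, D); returns (n_new, D) synthetic minority embeddings."""
    rng = np.random.default_rng(seed)
    minority = embeddings[labels == minority_class]            # (M, D)
    # pairwise distances among minority nodes only
    d = np.linalg.norm(minority[:, None, :] - minority[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbors = np.argsort(d, axis=1)[:, :k]                   # k nearest minority nodes
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        j = neighbors[i, rng.integers(k)]
        lam = rng.random()                                     # interpolation factor in [0, 1)
        synthetic.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.stack(synthetic)

emb = np.random.randn(100, 16)
lab = np.array([0] * 90 + [1] * 10)          # class 1 is the minority
new_nodes = smote_nodes(emb, lab, minority_class=1, n_new=20)
```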

Citations: 0
Contour-assistance-based video matting localization
IF 3.9, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-30. DOI: 10.1007/s00530-024-01456-z
Zhu Wenyi, Ding Xiangling, Zhang Chao, Deng Yingqian, Zhao Yulin

Video matting is a technique used to replace foreground objects in video frames by predicting their alpha matte. Originally developed for film special effects, advertisements, and live streaming, video matting can also be exploited for malicious tampering, leaving imperceptible traces. This highlights the need for effective forensic techniques to detect such tampering. Current research in video matting forensics is limited, largely focusing on frame-by-frame analysis, which fails to account for the temporal characteristics of videos and thus falls short in accurately localizing tampered regions. In this paper, we address this gap by leveraging the entire video sequence to improve tampering detection. We propose a two-branch network that integrates contour information of tampered objects into the forgery localization process, enhancing the extraction of tampering traces and contour features. Additionally, we introduce a tamper contour detection module and a feature enhancement module to refine tampered region identification. Extensive experiments conducted on both overt and synthetic tampering datasets demonstrate that our method effectively locates tampered regions, outperforming existing video forensics techniques.
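
One common way to obtain contour supervision for a contour branch such as the one described above is to derive it from the binary tampered-region mask via a morphological gradient; the sketch below shows this construction, which is an assumption for illustration rather than the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def mask_to_contour(mask, ksize=3):
    """mask: (B, 1, H, W) binary tamper mask -> (B, 1, H, W) thin contour map."""
    pad = ksize // 2
    dilated = F.max_pool2d(mask, ksize, stride=1, padding=pad)      # morphological dilation
    eroded = -F.max_pool2d(-mask, ksize, stride=1, padding=pad)     # morphological erosion
    return (dilated - eroded).clamp(0, 1)                           # gradient = boundary band

mask = torch.zeros(1, 1, 64, 64)
mask[..., 20:40, 20:40] = 1.0               # a square tampered region
contour = mask_to_contour(mask)             # 1s only along the square's boundary
```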

Citations: 0
Weakly-supervised temporal action localization using multi-branch attention weighting
IF 3.9, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-30. DOI: 10.1007/s00530-024-01445-2
Mengxue Liu, Wenjing Li, Fangzhen Ge, Xiangjun Gao

Weakly-supervised temporal action localization aims to train an accurate and robust localization model using only video-level labels. Due to the lack of frame-level temporal annotations, existing weakly-supervised temporal action localization methods typically rely on multiple instance learning mechanisms to localize and classify all action instances in an untrimmed video. However, these methods focus only on the most discriminative regions that contribute to the classification task, neglecting a large number of ambiguous background and context snippets in the video. We believe that these controversial snippets have a significant impact on the localization results. To mitigate this issue, we propose a multi-branch attention weighting network (MAW-Net), which introduces an additional non-action class and integrates a multi-branch attention module to generate action and background attention, respectively. In addition, considering the correlation among context, action, and background, we use the difference between action and background attention to construct context attention. Finally, based on these three types of attention values, we obtain three new class activation sequences that distinguish action, background, and context. This enables our model to effectively remove background and context snippets from the localization results. Extensive experiments were performed on the THUMOS-14 and ActivityNet-1.3 datasets. The experimental results show that our method is superior to other state-of-the-art methods, and its performance is comparable to that of fully-supervised approaches.
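
A minimal PyTorch sketch of the attention-weighting scheme: action and background attention are predicted per snippet, context attention is formed from their difference, and each stream re-weights a base class activation sequence over C action classes plus one non-action class. Layer sizes and the combination rule are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class MultiBranchAttentionSketch(nn.Module):
    def __init__(self, feat_dim=2048, num_classes=20):
        super().__init__()
        self.cls_head = nn.Conv1d(feat_dim, num_classes + 1, 1)   # +1 non-action class
        self.act_att = nn.Sequential(nn.Conv1d(feat_dim, 1, 1), nn.Sigmoid())
        self.bkg_att = nn.Sequential(nn.Conv1d(feat_dim, 1, 1), nn.Sigmoid())

    def forward(self, feats):
        # feats: (B, feat_dim, T) snippet features of an untrimmed video
        cas = self.cls_head(feats)                 # base CAS: (B, C+1, T)
        a_act = self.act_att(feats)                # (B, 1, T) action attention
        a_bkg = self.bkg_att(feats)                # (B, 1, T) background attention
        a_ctx = (a_act - a_bkg).clamp(min=0)       # context attention from the difference
        return cas * a_act, cas * a_bkg, cas * a_ctx   # three attention-weighted CAS

cas_act, cas_bkg, cas_ctx = MultiBranchAttentionSketch()(torch.randn(2, 2048, 100))
```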

Citations: 0
Cvstgan: A Controllable Generative Adversarial Network for Video Style Transfer of Chinese Painting
IF 3.9, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-30. DOI: 10.1007/s00530-024-01457-y
Zunfu Wang, Fang Liu, Changjuan Ran

Style transfer aims to apply the stylistic characteristics of a reference image onto a target image or video. Existing studies on style transfer suffer from either fixed style without adjustability or unclear stylistic patterns in output results. Moreover, concerning video style transfer, issues such as discontinuity in content and time, flickering, and local distortions are common. Current research on artistic image style transfer mainly focuses on Western painting. In view of the differences between Eastern and Western painting, the existing methods cannot be directly applied to the style transfer of Chinese painting. To address the aforementioned issues, we propose a controllable style transfer method based on generative adversarial networks. The method operates directly in the feature space of style and content domains, synthesizing target images by merging style features and content features. To enhance the output stylization effect of Chinese painting, we incorporate stroke constraints and ink diffusion constraints to improve the visual quality. To mitigate issues such as blank spaces, highlights, and color confusion resulting in flickering and noise in Chinese painting style videos, we propose a flow-based stylized video optimization strategy to ensure consistency in content and time. Qualitative and quantitative experimental results show that our method outperforms state-of-the-art style transfer methods.
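
The flow-based temporal consistency strategy can be illustrated by warping the previously stylized frame to the current frame with an optical flow field and penalizing the difference, as sketched below. The flow would come from an external estimator, and this loss form is an assumed illustration rather than the authors' exact objective.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(frame, flow):
    """frame: (B, C, H, W); flow: (B, 2, H, W) in pixels -> backward-warped frame."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(frame)           # (2, H, W) pixel grid
    coords = base.unsqueeze(0) + flow                               # absolute sampling positions
    # normalize to [-1, 1] for grid_sample (x over W, y over H)
    grid_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    grid_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)                    # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

def temporal_consistency_loss(stylized_t, stylized_prev, flow, valid_mask):
    # Penalize the stylized current frame where it deviates from the warped
    # previous frame, masked where the flow is unreliable (e.g., occlusions).
    warped_prev = warp_with_flow(stylized_prev, flow)
    return (valid_mask * (stylized_t - warped_prev).abs()).mean()

s_t, s_prev = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)                 # identity flow for the example
loss = temporal_consistency_loss(s_t, s_prev, flow, torch.ones(1, 1, 64, 64))
```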

Citations: 0
BSP-Net: automatic skin lesion segmentation improved by boundary enhancement and progressive decoding methods
IF 3.9, CAS Tier 3 (Computer Science), Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2024-08-29. DOI: 10.1007/s00530-024-01453-2
Chengyun Ma, Qimeng Yang, Shengwei Tian, Long Yu, Shirong Yu

Automatic skin lesion segmentation from dermoscopy images is of great significance in the early treatment of skin cancers, which is yet challenging even for dermatologists due to the inherent issues, i.e., considerable size, shape and color variation, and ambiguous boundaries. In this paper, we propose a network BSP-Net that implements the combination of critical boundary information and segmentation tasks to simultaneously solve the variation and boundary problems in skin lesion segmentation. The architecture of BSP-Net primarily consists of a multi-scale boundary enhancement (MBE) module and a progressive fusion decoder (PD). The MBE module, by deeply extracting boundary information in both multi-axis frequency and multi-scale spatial domains, generates precise boundary key-point prediction maps. This process not only accurately models local boundary information but also effectively retains global contextual information. On the other hand, the PD employs an asymmetric decoding strategy, guiding the generation of refined segmentation results by combining boundary-enhanced features rich in geometric details with global features containing semantic information about lesions. This strategy progressively fuses boundary and semantic information at different levels, effectively enabling high-performance collaboration between cross-level contextual features. To assess the effectiveness of BSP-Net, we conducted extensive experiments on two public datasets (ISIC-2016 & PH2, ISIC-2018) and one private dataset (XJUSKin). BSP-Net achieved Dice coefficients of 90.81, 92.41, and 83.88%, respectively. Additionally, it demonstrated precise boundary delineation with Average Symmetric Surface Distance (ASSD) scores of 7.96, 6.88, and 10.92%, highlighting its strong performance in skin lesion segmentation.
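
The Average Symmetric Surface Distance (ASSD) reported above can be computed from two binary masks with distance transforms, as in the small sketch below: for each boundary pixel of one mask, take its distance to the other mask's boundary, then average over both directions. Boundary extraction and pixel spacing are simplified assumptions here.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask):
    """Boundary pixels of a binary mask."""
    return mask & ~binary_erosion(mask)

def assd(pred, gt):
    """pred, gt: boolean arrays of the same shape -> ASSD in pixels."""
    s_pred, s_gt = surface(pred), surface(gt)
    # distance from every pixel to the nearest surface pixel of the other mask
    d_to_gt = distance_transform_edt(~s_gt)
    d_to_pred = distance_transform_edt(~s_pred)
    dists = np.concatenate([d_to_gt[s_pred], d_to_pred[s_gt]])
    return dists.mean()

gt = np.zeros((128, 128), bool); gt[32:96, 32:96] = True
pred = np.zeros((128, 128), bool); pred[34:98, 30:94] = True
print(assd(pred, gt))   # small value -> boundaries nearly coincide
```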

Citations: 0