
Signal Processing-Image Communication: Latest Publications

SES-ReNet: Lightweight deep learning model for human detection in hazy weather conditions
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-30 DOI: 10.1016/j.image.2024.117223
Yassine Bouafia, Mohand Saïd Allili, Loucif Hebbache, Larbi Guezouli
Accurate detection of people in outdoor scenes plays an essential role in improving personal safety and security. However, existing human detection algorithms face significant challenges when visibility is reduced and human appearance is degraded, particularly in hazy weather conditions. To address this problem, we present a novel lightweight model based on the RetinaNet detection architecture. The model incorporates a lightweight backbone feature extractor, a dehazing functionality based on knowledge distillation (KD), and a multi-scale attention mechanism based on the Squeeze and Excitation (SE) principle. KD is achieved from a larger network trained on unhazed clear images, whereas attention is incorporated at low-level and high-level features of the network. Experimental results have shown remarkable performance, outperforming state-of-the-art methods while running at 22 FPS. The combination of high accuracy and real-time capabilities makes our approach a promising solution for effective human detection in challenging weather conditions and suitable for real-time applications.
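For readers unfamiliar with the Squeeze-and-Excitation (SE) principle the abstract refers to, the following is a minimal PyTorch sketch of a standard SE channel-attention block. The reduction ratio and the way such a block would be attached to low- and high-level RetinaNet features are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard Squeeze-and-Excitation channel attention (illustrative)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # excitation: per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # reweight feature channels

# Example: gate a hypothetical backbone feature map
feat = torch.randn(2, 256, 32, 32)
print(SEBlock(256)(feat).shape)  # torch.Size([2, 256, 32, 32])
```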
Citations: 0
HOI-V: One-stage human-object interaction detection based on multi-feature fusion in videos
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-29 DOI: 10.1016/j.image.2024.117224
Dongzhou Gu, Kaihua Huang, Shiwei Ma, Jiang Liu
Effective detection of Human-Object Interaction (HOI) is important for machine understanding of real-world scenarios. Nowadays, image-based HOI detection has been abundantly investigated, and recent one-stage methods strike a balance between accuracy and efficiency. However, it is difficult to predict temporal-aware interaction actions from static images since limited temporal context information is introduced. Meanwhile, due to the lack of early large-scale video HOI datasets and the high computational cost of spatial-temporal HOI model training, recent exploratory studies mostly follow a two-stage paradigm, but independent object detection and interaction recognition still suffer from computational redundancy and independent optimization. Therefore, inspired by the one-stage interaction point detection framework, a one-stage spatial-temporal HOI detection baseline is proposed in this paper, in which the short-term local motion features and long-term temporal context features are obtained by the proposed temporal differential excitation module (TDEM) and DLA-TSM backbone. Complementary visual features between multiple clips are then extracted by multi-feature fusion and fed into the parallel detection branches. Finally, a video dataset containing only actions with reduced data size (HOI-V) is constructed to motivate further research on end-to-end video HOI detection. Extensive experiments are also conducted to verify the validity of our proposed baseline.
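The temporal differential excitation module (TDEM) is described only at a high level in the abstract. As a rough illustration, the sketch below gates clip features with frame-to-frame feature differences, a common way to emphasize short-term motion; the tensor layout and gating choice are assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class TemporalDiffExcitation(nn.Module):
    """Illustrative short-term motion gating from frame-wise feature differences."""
    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        diff = x[:, 1:] - x[:, :-1]                  # frame-to-frame feature differences
        motion = diff.abs().mean(dim=1)              # (batch, channels, H, W) motion summary
        gate = torch.sigmoid(self.proj(motion))      # per-position excitation weights
        return x * gate.unsqueeze(1)                 # broadcast the gate over time

clip = torch.randn(2, 8, 64, 28, 28)
print(TemporalDiffExcitation(64)(clip).shape)  # torch.Size([2, 8, 64, 28, 28])
```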
Citations: 0
High efficiency deep image compression via channel-wise scale adaptive latent representation learning
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-28 DOI: 10.1016/j.image.2024.117227
Chenhao Wu, Qingbo Wu, King Ngi Ngan, Hongliang Li, Fanman Meng, Linfeng Xu
Recent learning based neural image compression methods have achieved impressive rate–distortion (RD) performance via the sophisticated context entropy model, which performs well in capturing the spatial correlations of latent features. However, due to the dependency on the adjacent or distant decoded features, existing methods require an inefficient serial processing structure, which significantly limits its practicability. Instead of pursuing computationally expensive entropy estimation, we propose to reduce the spatial redundancy via the channel-wise scale adaptive latent representation learning, whose entropy coding is spatially context-free and parallelizable. Specifically, the proposed encoder adaptively determines the scale of the latent features via a learnable binary mask, which is optimized with the RD cost. In this way, lower-scale latent representation will be allocated to the channels with higher spatial redundancy, which consumes fewer bits and vice versa. The downscaled latent features could be well recovered with a lightweight inter-channel upconversion module in the decoder. To compensate for the entropy estimation performance degradation, we further develop an inter-scale hyperprior entropy model, which supports the high efficiency parallel encoding/decoding within each scale of the latent features. Extensive experiments are conducted to illustrate the efficacy of the proposed method. Our method achieves bitrate savings of 18.23%, 19.36%, and 27.04% over HEVC Intra, along with decoding speeds that are 46 times, 48 times, and 51 times faster than the baseline method on the Kodak, Tecnick, and CLIC datasets, respectively.
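The channel-wise scale decision via a learnable binary mask can be pictured as below: each latent channel carries a learnable logit, binarized with a straight-through estimator so the mask remains trainable under an RD objective. The downsampling factor and the estimator are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelScaleMask(nn.Module):
    """Illustrative per-channel full-scale / low-scale selection via a straight-through mask."""
    def __init__(self, channels: int, down_factor: int = 2):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(channels))    # learnable per-channel score
        self.down_factor = down_factor

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, channels, height, width) latent features
        p = torch.sigmoid(self.logits)
        hard = (p > 0.5).float()
        mask = hard + p - p.detach()                          # straight-through gradient trick
        mask = mask.view(1, -1, 1, 1)
        low = F.interpolate(                                  # simulate a low-scale representation
            F.avg_pool2d(y, self.down_factor),
            scale_factor=self.down_factor, mode="nearest")
        return mask * y + (1.0 - mask) * low                  # keep full scale only where mask = 1

y = torch.randn(1, 192, 16, 16)
print(ChannelScaleMask(192)(y).shape)  # torch.Size([1, 192, 16, 16])
```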
Citations: 0
Text in the dark: Extremely low-light text image enhancement
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-28 DOI: 10.1016/j.image.2024.117222
Che-Tsung Lin, Chun Chet Ng, Zhi Qin Tan, Wan Jun Nah, Xinyu Wang, Jie Long Kew, Pohao Hsu, Shang Hong Lai, Chee Seng Chan, Christopher Zach
Extremely low-light text images pose significant challenges for scene text detection. Existing methods enhance these images using low-light image enhancement techniques before text detection. However, they fail to address the importance of low-level features, which are essential for optimal performance in downstream scene text tasks. Further research is also limited by the scarcity of extremely low-light text datasets. To address these limitations, we propose a novel, text-aware extremely low-light image enhancement framework. Our approach first integrates a Text-Aware Copy-Paste (Text-CP) augmentation method as a preprocessing step, followed by a dual-encoder–decoder architecture enhanced with Edge-Aware attention modules. We also introduce text detection and edge reconstruction losses to train the model to generate images with higher text visibility. Additionally, we propose a Supervised Deep Curve Estimation (Supervised-DCE) model for synthesizing extremely low-light images, allowing training on publicly available scene text datasets such as IC15. To further advance this domain, we annotated texts in the extremely low-light See In the Dark (SID) and ordinary LOw-Light (LOL) datasets. The proposed framework is rigorously tested against various traditional and deep learning-based methods on the newly labeled SID-Sony-Text, SID-Fuji-Text, LOL-Text, and synthetic extremely low-light IC15 datasets. Our extensive experiments demonstrate notable improvements in both image enhancement and scene text tasks, showcasing the model’s efficacy in text detection under extremely low-light conditions. Code and datasets will be released publicly at https://github.com/chunchet-ng/Text-in-the-Dark.
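As a rough picture of what a Text-Aware Copy-Paste style augmentation does, the snippet below copies annotated word crops from one image and pastes them at random positions in another. The axis-aligned box format, the lack of collision handling, and the absence of photometric blending are simplifying assumptions, not the authors' exact procedure.

```python
import random
import numpy as np

def copy_paste_text(src_img, src_boxes, dst_img, max_words=3, seed=None):
    """Paste up to `max_words` axis-aligned word crops from src into dst (illustrative)."""
    rng = random.Random(seed)
    out = dst_img.copy()
    new_boxes = []
    h, w = out.shape[:2]
    for x1, y1, x2, y2 in rng.sample(list(src_boxes), min(max_words, len(src_boxes))):
        crop = src_img[y1:y2, x1:x2]
        ch, cw = crop.shape[:2]
        if ch >= h or cw >= w:
            continue                                # skip crops larger than the target image
        px = rng.randint(0, w - cw - 1)
        py = rng.randint(0, h - ch - 1)
        out[py:py + ch, px:px + cw] = crop          # naive paste; real pipelines blend seams
        new_boxes.append((px, py, px + cw, py + ch))
    return out, new_boxes

src = np.zeros((480, 640, 3), dtype=np.uint8)
dst = np.zeros((480, 640, 3), dtype=np.uint8)
aug, boxes = copy_paste_text(src, [(10, 10, 110, 40)], dst, seed=0)
print(aug.shape, boxes)
```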
Citations: 0
Double supervision for scene text detection and recognition based on BMINet
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-26 DOI: 10.1016/j.image.2024.117226
Hanyang Wan, Ruoyun Liu, Li Yu
Scene text detection and recognition currently stand as prominent research areas in computer vision, boasting a broad spectrum of potential applications in fields such as intelligent driving and automated production. Existing mainstream methodologies, however, suffer from notable deficiencies including incomplete text region detection, excessive background noise, and a neglect of simultaneous global information and contextual dependencies. In this study, we introduce BMINet, an innovative scene text detection approach based on boundary fitting, paired with a double-supervised scene text recognition method that incorporates text region correction. The BMINet framework is primarily structured around a boundary fitting module and a multi-scale fusion module. The boundary fitting module samples a specific number of control points equidistantly along the predicted boundary and adjusts their positions to better align the detection box with the text shape. The multi-scale fusion module integrates information from multi-scale feature maps to expand the network’s receptive field. The double-supervised scene text recognition method, incorporating text region correction, integrates the image processing modules for rotating rectangle boxes and binary image segmentation. Additionally, it introduces a correction network to refine text region boundaries. This method integrates recognition techniques based on CTC loss and attention mechanisms, emphasizing texture details and contextual dependencies in text images to enhance network performance through dual supervision. Extensive ablation and comparison experiments confirm the efficacy of the two-stage model in achieving robust detection and recognition outcomes, achieving a recognition accuracy of 80.6% on the Total-Text dataset.
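The boundary fitting module's first step, sampling a fixed number of control points equidistantly along a predicted boundary, can be sketched with simple arc-length resampling of a polygon. This is a generic geometric routine offered for illustration; it does not reproduce BMINet's learned position refinement.

```python
import numpy as np

def sample_control_points(polygon, num_points=16):
    """Resample a closed polygon (N, 2) into `num_points` equidistant control points."""
    poly = np.asarray(polygon, dtype=np.float64)
    closed = np.vstack([poly, poly[:1]])                  # close the contour
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])         # cumulative arc length at each vertex
    targets = np.linspace(0.0, cum[-1], num_points, endpoint=False)
    pts = np.empty((num_points, 2))
    pts[:, 0] = np.interp(targets, cum, closed[:, 0])     # linear interpolation along the contour
    pts[:, 1] = np.interp(targets, cum, closed[:, 1])
    return pts

quad = [(0, 0), (100, 0), (100, 40), (0, 40)]
print(sample_control_points(quad, num_points=8))
```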
Citations: 0
A new two-stage low-light enhancement network with progressive attention fusion strategy
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-26 DOI: 10.1016/j.image.2024.117229
Hegui Zhu, Luyang Wang, Zhan Gao, Yuelin Liu, Qian Zhao
Low-light image enhancement is a challenging problem in computer vision applications such as visual surveillance, driving behavior analysis, and medical imaging. Low-light images suffer from numerous degradations, including accumulated noise, artifacts, and color distortion, so recovering clear images with high visual quality is an important issue; solving it can effectively improve the performance of high-level computer vision tasks. In this study, we propose a new two-stage low-light enhancement network with a progressive attention fusion strategy. The two hallmarks of this method are global feature fusion (GFF) and local detail restoration (LDR), which enrich the global content of the image and restore local details, respectively. Experimental results on the LOL dataset show that the proposed model achieves good enhancement effects. Moreover, on a benchmark dataset without reference images, the proposed model also obtains a better NIQE score, outperforming most existing state-of-the-art methods in both quantitative and qualitative evaluations. All of this verifies the effectiveness and superiority of the proposed method.
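The GFF and LDR branches are not specified in detail in the abstract. Purely as a schematic, the block below fuses a global-content feature and a local-detail feature with a learned attention map, one common way to realize a progressive attention fusion step; every layer choice here is an assumption.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Illustrative fusion of global-content and local-detail features via a learned mask."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, global_feat, detail_feat):
        a = self.attn(torch.cat([global_feat, detail_feat], dim=1))
        return a * global_feat + (1.0 - a) * detail_feat   # per-pixel soft selection

g = torch.randn(1, 64, 64, 64)
d = torch.randn(1, 64, 64, 64)
print(AttentionFusion(64)(g, d).shape)  # torch.Size([1, 64, 64, 64])
```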
Citations: 0
Infrared and visible image fusion based on hybrid multi-scale decomposition and adaptive contrast enhancement
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-10-22 DOI: 10.1016/j.image.2024.117228
Yueying Luo, Kangjian He, Dan Xu, Hongzhen Shi, Wenxia Yin
Effectively fusing infrared and visible images enhances the visibility of infrared target information while capturing visual details. Balancing the brightness and contrast of the fusion image adequately has posed a significant challenge. Moreover, preserving detailed information in fusion images has been problematic. To address these issues, this paper proposes a fusion algorithm based on multi-scale decomposition and adaptive contrast enhancement. Initially, we present a hybrid multi-scale decomposition method aimed at extracting valuable information comprehensively from the source image. Subsequently, we advance an adaptive base layer optimization approach to regulate the brightness and contrast of the resultant fusion image. Lastly, we design a weight mapping rule grounded in saliency detection to integrate small-scale layers, thereby conserving the edge structure within the fusion outcome. Both qualitative and quantitative experimental results affirm the superiority of the proposed method over 11 state-of-the-art image fusion methods. Our method excels in preserving more texture and achieving higher contrast, which proves advantageous for monitoring tasks.
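To make the base/detail decomposition and weight-map idea concrete, here is a minimal two-scale fusion sketch: a Gaussian-blurred base layer, a residual detail layer, an averaged base fusion, and an absolute-value choose-max rule standing in for the saliency-based weight mapping. It deliberately simplifies the hybrid decomposition and the adaptive contrast enhancement described above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_fusion(ir, vis, sigma=5.0):
    """Illustrative base/detail fusion of co-registered IR and visible images (floats in [0, 1])."""
    base_ir, base_vis = gaussian_filter(ir, sigma), gaussian_filter(vis, sigma)
    det_ir, det_vis = ir - base_ir, vis - base_vis       # detail = image minus its base layer
    fused_base = 0.5 * (base_ir + base_vis)              # plain average; the paper optimizes this step
    choose_ir = np.abs(det_ir) >= np.abs(det_vis)        # crude saliency: keep the stronger detail
    fused_detail = np.where(choose_ir, det_ir, det_vis)
    return np.clip(fused_base + fused_detail, 0.0, 1.0)

ir = np.random.rand(128, 128)
vis = np.random.rand(128, 128)
print(two_scale_fusion(ir, vis).shape)  # (128, 128)
```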
Citations: 0
Struck-out handwritten word detection and restoration for automatic descriptive answer evaluation
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-09-30 DOI: 10.1016/j.image.2024.117214
Dajian Zhong, Shivakumara Palaiahnakote, Umapada Pal, Yue Lu
Unlike objective-type evaluation, descriptive answer evaluation is challenging because answers are unpredictable and written in a free style, and for this reason it has received special attention from many researchers. Automatic answer evaluation is useful because it avoids human intervention in marking, eliminates marking bias and, most importantly, saves enormous manpower. Developing an efficient and accurate system still faces several open challenges; one of them is cleaning the document, which includes removing struck-out words and restoring them. In this paper, we propose a system for struck-out handwritten word detection and restoration for automatic descriptive answer evaluation. The work has two stages. In the first stage, we explore the combination of ResNet50 and a diagonal-line (principal and secondary diagonals) segmentation module for detecting words, and then classify struck-out words using a classification network. In the second stage, we explore the combination of a U-Net backbone and a Bi-LSTM to predict, from the relationships between pixel sequences, the pixels that carry the actual text information of struck-out words and use them for restoration. Experimental results on our dataset and standard datasets show that the proposed model performs impressively on struck-out word detection and restoration. A comparative study with state-of-the-art methods shows that the proposed approach outperforms existing models in struck-out word detection and restoration.
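For the first-stage classification step (deciding whether a detected word crop is struck out), a straightforward baseline is a ResNet50 with its final layer replaced by a two-class head, as sketched below with torchvision. The diagonal-line segmentation module and the U-Net/Bi-LSTM restoration stage are not reproduced here, and this head is an assumption rather than the authors' exact network.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_struckout_classifier(num_classes: int = 2) -> nn.Module:
    """ResNet50 backbone with a binary head: struck-out vs. clean word crop (illustrative)."""
    net = models.resnet50(weights=None)              # weights=None: train from scratch or load your own
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

model = build_struckout_classifier()
crops = torch.randn(4, 3, 224, 224)                  # batch of resized word crops
logits = model(crops)
print(logits.shape)                                  # torch.Size([4, 2])
```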
Citations: 0
Full-reference calibration-free image quality assessment
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-09-23 DOI: 10.1016/j.image.2024.117212
Paolo Giannitrapani, Elio D. Di Claudio, Giovanni Jacovitti
Objective Image Quality Assessment (IQA) methods often lack linearity of their quality estimates with respect to the scores expressed by human subjects, and therefore IQA metrics undergo a calibration process based on subjective quality examples. However, example-based training presents a generalization challenge, hampering result comparison across different applications and operating conditions. In this paper, new Full Reference (FR) techniques are introduced that provide estimates linearly correlated with human scores without using calibration. We show that, on natural images, applying estimation theory and psychophysical principles to images degraded by Gaussian blur leads to a so-called canonical IQA method, whose estimates are linearly correlated with both the subjective scores and the viewing distance. We then show that any mainstream IQA method can be reduced to the canonical method by converting its metric on the basis of a single specimen image. The proposed scheme is extended to wide classes of degraded images, e.g., noisy and compressed images. The resulting calibration-free FR IQA methods allow comparability and interoperability across different imaging systems and viewing distances. A comparison of their statistical performance with state-of-the-art calibration-prone methods is finally provided, showing that the presented model is a valid alternative to the final 5-parameter calibration step of IQA methods; the two parameters of the model have a clear operational meaning and are simply determined in practical applications. The enhanced performance is achieved across multiple viewing-distance databases by independently realigning the blur values associated with each distance.
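One way to picture the "conversion based on a specimen image" step: measure how an existing FR metric responds to increasing Gaussian blur on a single specimen image, then use that curve to remap the metric's scores onto the blur-scale axis, which the paper argues behaves linearly with subjective scores. In the sketch below, SSIM is used only as a stand-in for a mainstream metric, and the blur range and interpolation are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity as ssim

def build_conversion_curve(specimen, sigmas):
    """Metric value of the specimen image at each blur level (decreasing for SSIM)."""
    return np.array([ssim(specimen, gaussian_filter(specimen, s), data_range=1.0)
                     for s in sigmas])

def remap_score(score, sigmas, curve):
    """Map a metric score onto the equivalent blur scale via the specimen curve."""
    # np.interp needs increasing x, so flip the decreasing SSIM-vs-sigma curve
    return float(np.interp(score, curve[::-1], sigmas[::-1]))

specimen = np.random.rand(128, 128)
sigmas = np.linspace(0.1, 8.0, 40)
curve = build_conversion_curve(specimen, sigmas)
print(remap_score(0.6, sigmas, curve))   # approximate blur sigma equivalent to an SSIM of 0.6
```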
Citations: 0
Improved multi-focus image fusion using online convolutional sparse coding based on sample-dependent dictionary
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-09-19 DOI: 10.1016/j.image.2024.117213
Sidi He, Chengfang Zhang, Haoyue Li, Ziliang Feng
Multi-focus image fusion merges multiple images captured from different focused regions of a scene to create a fully-focused image. Convolutional sparse coding (CSC) methods are commonly employed for accurate extraction of focused regions, but they often disregard computational cost. To overcome this, an online convolutional sparse coding (OCSC) technique was introduced, but the limited number of filters it can afford still degrades its overall performance. To address these limitations, a novel approach called Sample-Dependent Dictionary-based Online Convolutional Sparse Coding (SCSC) was proposed. SCSC enables the use of additional filters while maintaining low time and space complexity for processing high-dimensional or large data. Leveraging the computational efficiency and effective global feature extraction of SCSC, we propose a novel method for multi-focus image fusion. Our method involves a two-layer decomposition of each source image, yielding a base layer capturing the predominant features and a detail layer containing finer details. The amalgamation of the fused base and detail layers culminates in the reconstruction of the final image. The proposed method significantly mitigates artifacts, preserves fine details at the focus boundary, and demonstrates notable enhancements in both visual quality and objective evaluation of multi-focus image fusion.
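To complement the description above, a common baseline for the detail-layer decision in multi-focus fusion is a local focus measure such as Laplacian energy: each pixel of the fused detail layer is taken from the source whose neighborhood is more sharply focused. The sketch below uses this classic rule purely for illustration; it does not implement the SCSC dictionary learning itself.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_measure(img, win=9):
    """Local Laplacian energy: higher values indicate in-focus regions."""
    return uniform_filter(laplace(img) ** 2, size=win)

def fuse_detail_layers(detail_a, detail_b, img_a, img_b, win=9):
    """Pick each detail pixel from the source image that is better focused there (illustrative)."""
    mask = focus_measure(img_a, win) >= focus_measure(img_b, win)
    return np.where(mask, detail_a, detail_b)

a = np.random.rand(128, 128)
b = np.random.rand(128, 128)
det_a, det_b = a - uniform_filter(a, 9), b - uniform_filter(b, 9)   # crude detail layers
print(fuse_detail_layers(det_a, det_b, a, b).shape)  # (128, 128)
```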
Citations: 0