
Latest articles in Signal Processing-Image Communication

MA-MNN: Multi-flow attentive memristive neural network for multi-task image restoration
IF 3.4 | CAS Region 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-01 | Epub Date: 2025-04-28 | DOI: 10.1016/j.image.2025.117336
Peng He , Lin Zhang , Yu Yang , Yue Zhou , Shukai Duan , Xiaofang Hu
Images taken in rainy, hazy, and low-light environments severely hinder the performance of outdoor computer vision systems. Most data-driven image restoration methods are task-specific and computationally intensive, whereas degraded images are largely captured and processed on end-side devices with limited computing resources. To address these issues, this paper proposes a novel software-hardware co-designed image restoration method, the multi-flow attentive memristive neural network (MA-MNN), which combines a deep learning algorithm with the memristor, a nanoscale device. A multi-flow aggregation block exploits multi-level complementary spatial contextual information. A dense connection design provides smooth feature propagation across units and alleviates the vanishing-gradient problem. A supervised calibration block realizes a dual-attention mechanism that helps the model identify and re-calibrate the transformed features. In addition, a memristor-based hardware implementation scheme is designed to offer a low-energy solution for embedded applications. Extensive experiments on image deraining, image dehazing, and low-light image enhancement show that the proposed method is highly competitive against more than 20 state-of-the-art methods.
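The hardware scheme pairs the network with memristive crossbars, where a matrix-vector product is computed in analog by mapping weights to conductances and summing currents. A minimal sketch of that idea, with an idealized differential (positive/negative) conductance mapping; all names and conductance bounds are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def crossbar_matvec(weights, voltages, g_min=1e-6, g_max=1e-4):
    """Idealized memristive crossbar: map signed weights onto two
    conductance arrays (a positive and a negative column per weight)
    and read out the differential currents (Kirchhoff summation)."""
    w = np.asarray(weights, dtype=float)
    scale = float(np.max(np.abs(w))) or 1.0
    g_pos = g_min + (g_max - g_min) * np.clip(w, 0, None) / scale
    g_neg = g_min + (g_max - g_min) * np.clip(-w, 0, None) / scale
    i_pos = g_pos @ voltages  # column currents from positive devices
    i_neg = g_neg @ voltages  # column currents from negative devices
    # undo the conductance mapping; the ideal crossbar reproduces W @ v
    return (i_pos - i_neg) * scale / (g_max - g_min)

W = np.array([[1.0, -2.0], [0.5, 0.0]])
v = np.array([0.2, 0.1])
out = crossbar_matvec(W, v)
```

In the ideal (noise-free, linear-device) setting the differential readout cancels the `g_min` offset, so the result matches the digital product exactly.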
Citations: 0
Mining the Salient Spatio-Temporal Feature with S2TF-Net for action recognition
IF 3.4 | CAS Region 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-01 | Epub Date: 2025-07-15 | DOI: 10.1016/j.image.2025.117381
Xiaoxi Liu , Ju Liu , Lingchen Gu , Yafeng Li , Xiaojun Chang , Feiping Nie
Recently, 3D Convolutional Neural Networks (3D ConvNets) have been widely exploited for action recognition and have achieved satisfactory performance. However, discriminative action features are often drowned in large amounts of irrelevant information, which greatly increases the difficulty of video representation. To find a generic, cost-efficient way to balance parameters and performance, we present a novel network that mines the Salient Spatio-Temporal Feature on a 3D ConvNets backbone for action recognition, termed S2TF-Net. First, we extract the salient features of each 3D residual block by constructing a multi-scale module for Salient Semantic Feature mining (SSF-Module). Then, to preserve salient features through pooling operations, we establish a Two-branch Salient Feature Preserving Module (TSFP-Module). With a proper loss function, these two modules can collaborate in an "easy-to-concat" fashion with most 3D ResNet backbones to classify more accurately, albeit with a shallower network. Finally, we conduct experiments on three popular action recognition datasets, where S2TF-Net is competitive with deeper 3D backbones and current state-of-the-art results. Taking P3D, 3D ResNet, Non-local I3D, and X3D as baselines, the proposed method improves on each to varying degrees. In particular, for Non-local I3D ResNet, S2TF-Net improves accuracy by 4.1%, 3.0%, and 4.6% on the Kinetics-400, UCF101, and HMDB51 datasets, reaching 74.8%, 95.1%, and 80.9%, respectively. We hope this study provides useful inspiration and experience for future research on more cost-effective methods. Code is released at: https://github.com/xiaoxiAries/S2TFNet.
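As a rough illustration of the multi-scale aggregation idea behind the SSF-Module (not the paper's architecture, which uses learned 3D convolutions), one can pool a feature map at several window sizes, upsample each stream back, and fuse them; the scales and fusion rule here are assumptions:

```python
import numpy as np

def multi_scale_salient(feat, scales=(1, 2, 4)):
    """Toy multi-scale aggregation: average-pool a 2D feature map at
    several window sizes, upsample by repetition, and average the
    streams. Assumes h and w are divisible by every scale."""
    h, w = feat.shape
    streams = []
    for s in scales:
        # non-overlapping s-by-s average pooling via reshape
        pooled = feat.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        # nearest-neighbour upsampling back to the original size
        streams.append(np.repeat(np.repeat(pooled, s, axis=0), s, axis=1))
    return np.mean(streams, axis=0)

x = np.arange(16, dtype=float).reshape(4, 4)
y = multi_scale_salient(x)
```

Coarser scales emphasize region-level context while scale 1 keeps the original detail, so the fused map blends local and contextual evidence.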
Citations: 0
Multiple-image encryption algorithm based on S-boxes and DNA sequences
IF 3.4 | CAS Region 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-01 | Epub Date: 2025-05-25 | DOI: 10.1016/j.image.2025.117353
Muhammad Umair Safdar , Tariq Shah , Asif Ali
Image encryption is crucial for safeguarding sensitive visual data; however, traditional methods often struggle with efficiency and with adapting to the unique characteristics of images. This research is motivated by the potential of ring-based algebraic structures for developing lightweight, secure, and efficient encryption schemes designed specifically for image data. The article presents a novel approach to image encryption using a local ring algebraic structure. The proposed method encrypts multiple images by constructing substitution boxes from subsets that are not subgroups but satisfy the identity and invertibility axioms. The challenge of using such subsets for encryption is addressed by taking the unit elements of the ring, picking a subgroup, and splitting it into two subsets. One subset generates the substitution box used for the substitution process, while the other is mapped to the Galois field, yielding a second substitution box used for diffusion. A DNA sequence is applied to the red, green, and blue channels of the image, and a key is generated by hashing the image and using a subset of the subgroup of units of the ring. Finally, all channels are XORed with the key. The performance of the proposed scheme is evaluated with several analyses, and the scheme is found to outperform existing approaches, presenting a promising solution for image encryption in cryptography.
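The DNA step in schemes of this kind typically encodes each byte as four bases under one of the eight standard 2-bit coding rules, and the final diffusion is a plain XOR with a key stream. A minimal sketch of those two building blocks (the specific rule and helper names are illustrative, not the paper's exact construction):

```python
import numpy as np

# One of the eight standard DNA coding rules: 00->A, 01->C, 10->G, 11->T.
RULE = np.array(list("ACGT"))

def dna_encode(channel):
    """Encode each byte of an image channel as four DNA bases (2 bits each)."""
    c = np.asarray(channel, dtype=np.uint8).ravel()
    pairs = np.stack([(c >> 6) & 3, (c >> 4) & 3, (c >> 2) & 3, c & 3], axis=1)
    return RULE[pairs]

def xor_diffuse(channel, key):
    """Final diffusion step from the abstract: XOR the channel with a key."""
    return np.bitwise_xor(np.asarray(channel, np.uint8),
                          np.asarray(key, np.uint8))

px = np.array([27], dtype=np.uint8)  # 27 = 0b00011011 -> bases A C G T
enc = xor_diffuse(px, np.array([255], np.uint8))
```

Because XOR is self-inverse, applying the same key stream a second time recovers the plaintext channel, which is why the decryptor only needs the key, not a separate inverse operation.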
Citations: 0
Camera calibration using property of asymptotes with application to sports scenes
IF 3.4 | CAS Region 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-01 | Epub Date: 2025-04-12 | DOI: 10.1016/j.image.2025.117331
Fengli Yang, Xuechun Wang, Yue Zhao
Inspired by Ying's work on calibration techniques, this study proposes a new planar pattern (hereinafter the phi-type model), consisting of a circle and its diameter, as the calibration scene. In sports scenarios such as a soccer match or a basketball court, most existing methods require information about scene points in three-dimensional space. However, an interesting observation is that the centre circle and the halfway line of the midfield form a phi-type template. A new automatic method based on the properties of asymptotes is proposed using images of the midfield. All intrinsic parameters of the camera can be determined without assumptions such as zero skew or unit aspect ratio. The main advantages of our technique are that it involves neither point nor line matching and does not require the metric information of the model plane. The feasibility and validity of the proposed algorithm were verified by testing noise sensitivity and performing image metric rectification.
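A standard building block for calibration from a projected circle such as the phi-type pattern is fitting a conic to image points. The sketch below shows a least-squares conic fit via the SVD null space; this is generic machinery for recovering the image of the circle, not the paper's full asymptote-based procedure:

```python
import numpy as np

def fit_conic(pts):
    """Least-squares fit of a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to image points; returns the 3x3 symmetric conic matrix C such that
    [x, y, 1] C [x, y, 1]^T = 0 on the curve."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.stack([x * x, x * y, y * y, x, y, np.ones_like(x)], axis=1)
    _, _, vt = np.linalg.svd(A)       # null-space vector = conic coefficients
    a, b, c, d, e, f = vt[-1]
    return np.array([[a, b / 2, d / 2],
                     [b / 2, c, e / 2],
                     [d / 2, e / 2, f]])

# synthetic "imaged circle": centre (2, -1), radius 3
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
circle = np.stack([2 + 3 * np.cos(t), -1 + 3 * np.sin(t)], axis=1)
C = fit_conic(circle)
p = np.array([circle[0, 0], circle[0, 1], 1.0])  # homogeneous sample point
```

Once the conic matrix is available, properties such as its intersection with a fitted diameter line can be analysed algebraically, which is the kind of input an asymptote-based method operates on.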
Citations: 0
Driver distraction detection based on adaptive tiny targets and lightweight networks
IF 3.4 | CAS Region 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-01 | Epub Date: 2025-05-15 | DOI: 10.1016/j.image.2025.117342
Shuangshuang Gu , Bin Wen , Shiyao Chen , Yuanyuan Li , Guanqiu Qi , Linhong Shuai , Zhiqin Zhu
Driver distraction detection is critical to reducing road traffic accidents and increasing the efficiency of advanced driver assistance systems. Real-time lightweight models are especially important for in-vehicle devices with limited computing resources. However, most existing methods focus on designing lighter network architectures and ignore the performance loss incurred when detecting tiny targets. To jointly optimize tiny-target detection accuracy and network lightweighting, a driver distraction detection method based on adaptive tiny-target detection and lightweight networks, ATD2Net, is proposed. It aims to reduce model complexity while fully capturing target features for accurate detection. ATD2Net consists of three core modules: a Channel Reconstruction Perception Module (CRPM), a Dynamic Spatial Self-locking Module (DSSM), and a Structural Feedback Optimization Module (SFOM). CRPM reconfigures channels and reconstructs them into the batch dimension, using a parallel strategy to perceive interactive features between channels and significantly enhancing feature extraction. DSSM adopts dynamic locking and adaptive spatial selection mechanisms to capture multi-scale features while injecting adaptive spatial information; it effectively aggregates instance features and reduces interference from conflicting and background information, thereby improving the detection of tiny targets. SFOM uses dependency trees to model inter-layer relationships and integrates coupling parameters into groupings, applying a sparse strategy to remove unimportant parameters and achieving lightweight modeling while balancing accuracy and speed. Experimental results show that ATD2Net outperforms the latest driver distraction detection methods, demonstrating excellent performance and good application prospects.
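The channel-reconstruction idea in CRPM, folding channel groups into the batch dimension so they can be processed in parallel, can be sketched with plain array reshapes; the layout, group count, and function names here are assumptions for illustration:

```python
import numpy as np

def channels_to_batch(x, groups):
    """Split the channel axis of an (N, C, H, W) tensor into `groups`
    and fold them into the batch axis, so each group of C/groups
    channels can be processed as an independent batch item."""
    n, c, h, w = x.shape
    assert c % groups == 0
    return x.reshape(n, groups, c // groups, h, w) \
            .reshape(n * groups, c // groups, h, w)

def batch_to_channels(x, groups):
    """Inverse operation: fold the groups back into the channel axis."""
    nb, cg, h, w = x.shape
    return x.reshape(nb // groups, groups * cg, h, w)

x = np.random.rand(2, 8, 4, 4)
y = channels_to_batch(x, groups=4)
```

Because both directions are pure reshapes of a contiguous array, the round trip is lossless, so the rearrangement costs no information, only a change of parallelization axis.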
Citations: 0
Multi-fish tracking with underwater image enhancement by deep network in marine ecosystems
IF 3.4 | CAS Region 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-01 | Epub Date: 2025-04-23 | DOI: 10.1016/j.image.2025.117321
Prerana Mukherjee , Srimanta Mandal , Koteswar Rao Jerripothula , Vrishabhdhwaj Maharshi , Kashish Katara
Tracking marine life plays a crucial role in understanding the migration patterns, movements, and population growth of underwater species. Deep learning-based fish-tracking networks have been actively researched and developed, yielding promising results. In this work, we propose an end-to-end deep learning framework for tracking fish in unconstrained marine environments. The core innovation of our approach is a Siamese-based architecture integrated with an image enhancement module, designed to measure appearance similarity effectively. The enhancement module consists of convolutional layers and a squeeze-and-excitation block, pre-trained on degraded and clean image pairs to counter underwater distortions. The enhanced feature representation is leveraged within the Siamese framework to compute an appearance similarity score, which is further refined with prediction scores based on fish movement patterns. To ensure robust tracking, we combine the appearance similarity score, the prediction score, and an IoU-based similarity score to generate fish trajectories using the Hungarian algorithm. Our framework significantly reduces ID switches, by 35.6% on the Fish4Knowledge dataset and 3.8% on the GMOT-40 fish category, while maintaining high tracking accuracy. The source code of this work is available at: https://github.com/srimanta-mandal/Multi-Fish-Tracking-with-Underwater-Image-Enhancement.
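The association step described above, combining appearance, prediction, and IoU similarities into one cost and solving an assignment between detections and tracks, can be sketched as follows. For brevity this tiny example solves the assignment exactly by enumeration rather than the Hungarian algorithm the paper uses, and the combination weights are illustrative assumptions:

```python
import numpy as np
from itertools import permutations

def associate(app, pred, iou, w=(0.5, 0.2, 0.3)):
    """Fuse the three track-vs-detection similarity matrices with
    illustrative weights and return the assignment maximizing the
    total score (brute force; a stand-in for Hungarian matching)."""
    score = w[0] * app + w[1] * pred + w[2] * iou
    n = score.shape[0]
    best = max(permutations(range(n)),
               key=lambda p: sum(score[i, p[i]] for i in range(n)))
    return list(best)  # best[i] = detection index assigned to track i

app = np.array([[0.9, 0.1], [0.2, 0.8]])   # appearance similarity
pred = np.array([[0.7, 0.3], [0.4, 0.6]])  # motion-prediction score
iou = np.array([[0.8, 0.0], [0.1, 0.9]])   # IoU-based similarity
```

For real track counts one would replace the enumeration with `scipy.optimize.linear_sum_assignment`, which implements the same optimal matching in polynomial time.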
Citations: 0
Object detection-based deep autoencoder hashing image retrieval
IF 3.4 | CAS Region 3 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-10-01 | Epub Date: 2025-07-18 | DOI: 10.1016/j.image.2025.117384
Uğur Erkan , Ahmet Yilmaz , Abdurrahim Toktas , Qiang Lai , Suo Gao
Image Retrieval (IR), which returns similar images from a large image database, has become an important task as multimedia data grows. Existing studies use hash codes representing features generated from the whole image, including redundant semantics from the background. In this study, a novel Object Detection-based Hashing IR (ODH-IR) scheme using You Only Look Once (YOLO) and an autoencoder is presented to ignore clutter in images. Integrating YOLO with the autoencoder yields the most representative hash code, based on the meaningful objects in an image. The autoencoder compresses the detected-object vector to the desired bit length of the hash code. The ODH-IR scheme is validated against the state of the art on three well-known datasets using precise metrics: it achieves the best result on 35 of 36 metric measurements and the best average mean rank of 1.03. Moreover, three illustrative IR examples show that it retrieves the most relevant semantics. The results demonstrate that ODH-IR is an effective scheme, thanks to its hashing method built on object detection with YOLO and the autoencoder.
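The retrieval pipeline, compressing an object feature vector to a short binary code and ranking the database by Hamming distance, can be sketched as below; a fixed random projection stands in for the trained autoencoder, which is an assumption for illustration only:

```python
import numpy as np

def binary_hash(features, projection):
    """Illustrative hashing head: project the object feature vector to
    the target bit length and binarize by sign (the paper trains an
    autoencoder for this compression; here a fixed projection is used)."""
    return (features @ projection > 0).astype(np.uint8)

def hamming_rank(query, codes):
    """Rank database codes by Hamming distance to the query code."""
    dists = np.count_nonzero(codes != query, axis=1)
    return np.argsort(dists, kind="stable")

rng = np.random.default_rng(0)
proj = rng.standard_normal((16, 8))   # 16-d feature -> 8-bit hash code
db = rng.standard_normal((5, 16))     # toy database of object features
codes = binary_hash(db, proj)
q = binary_hash(db[2], proj)          # query identical to database item 2
ranked = hamming_rank(q, codes)
```

Hamming distance between short binary codes is a few XOR/popcount operations, which is what makes hashing-based retrieval scale to large databases.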
Citations: 0
Higher-order motion calibration and sparsity based outlier correction for video FRUC
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-10-01 Epub Date : 2025-04-17 DOI: 10.1016/j.image.2025.117327
Jiale He, Qunbing Xia, Gaobo Yang, Xiangling Ding
For frame rate up-conversion (FRUC), one of the key challenges is handling the irregular and large motions that widely exist in video scenes. However, most existing FRUC methods assume constant brightness and linear motion, which easily leads to undesirable artifacts such as motion blurriness and frame flickering. In this work, we propose an advanced FRUC method that uses a high-order model for motion calibration and a sparse sampling strategy for outlier correction. Unidirectional motion estimation is used to accurately locate objects from the previous frame to the following frame in a coarse-to-fine pyramid structure. Then, the object motion trajectory is fine-tuned to approximate the real motion, and possible outlier regions are located and recorded. Moreover, image sparsity is exploited as prior knowledge for outlier correction, and the outlier index map is used to design the measurement matrix. Based on the theory of sparse sampling, the outlier regions are reconstructed to eliminate side effects such as overlapping, holes and blurring. Extensive experimental results demonstrate that the proposed approach outperforms state-of-the-art FRUC methods in terms of both objective and subjective quality of the interpolated frames.
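The two ingredients this abstract builds on — unidirectional block-matching motion estimation and linear midpoint interpolation — can be sketched in isolation. The paper's contribution is the high-order calibration and sparse outlier correction layered on top of this baseline; the sketch below is only the baseline, with hypothetical function names and a single pyramid level.

```python
import numpy as np

def block_motion(prev, curr, bs=8, search=4):
    """Exhaustive SAD block matching: one motion vector per bs*bs block of `prev`,
    searched within +/- `search` pixels in `curr` (unidirectional estimation)."""
    H, W = prev.shape
    mvs = np.zeros((H // bs, W // bs, 2), dtype=int)
    for by in range(H // bs):
        for bx in range(W // bs):
            y0, x0 = by * bs, bx * bs
            block = prev[y0:y0 + bs, x0:x0 + bs]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if 0 <= y1 and y1 + bs <= H and 0 <= x1 and x1 + bs <= W:
                        sad = np.abs(block - curr[y1:y1 + bs, x1:x1 + bs]).sum()
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs

def interpolate_midframe(prev, curr, mvs, bs=8):
    """Place each block halfway along its (assumed linear) trajectory and average
    the two endpoints; uncovered pixels fall back to `curr`."""
    mid = curr.astype(float).copy()
    H, W = prev.shape
    for by in range(mvs.shape[0]):
        for bx in range(mvs.shape[1]):
            dy, dx = mvs[by, bx]
            y0, x0 = by * bs, bx * bs
            ym, xm = y0 + dy // 2, x0 + dx // 2   # floor division: coarse midpoint
            if 0 <= ym and ym + bs <= H and 0 <= xm and xm + bs <= W:
                mid[ym:ym + bs, xm:xm + bs] = 0.5 * (
                    prev[y0:y0 + bs, x0:x0 + bs]
                    + curr[y0 + dy:y0 + dy + bs, x0 + dx:x0 + dx + bs])
    return mid
```

The overlaps, holes and blurring that this naive midpoint placement produces are exactly the side effects the proposed sparse-sampling outlier correction is designed to remove.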
Citations: 0
Facial expression transformation for anime-style image based on decoder control and attention mask
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-10-01 Epub Date : 2025-05-06 DOI: 10.1016/j.image.2025.117343
Xinhao Rao, Weidong Min, Ziyang Deng, Mengxue Liu
Recently, human facial expression transformation has been extensively studied using Generative Adversarial Networks (GANs). GANs have also been applied successfully to transforming anime-style images. However, current methods for anime pictures fail to refine expression control efficiently, leading to control effects weaker than expected. Moreover, it remains challenging to preserve the identity of the original anime face during transformation. To address these issues, we propose an expression transformation method for anime-style images. To strengthen the control exerted by discrete expression tags, a mapping network is proposed that maps them to high-dimensional control information, which is then injected into the network multiple times during transformation. Additionally, to better preserve the anime face identity during transformation, an integrated attention mask mechanism is introduced that lets the network's expression control focus on expression-related features while leaving unrelated features unaffected. Finally, we conduct a large number of experiments, with both quantitative and qualitative evaluations, to verify the validity of the proposed method. The results demonstrate the superiority of our proposed method compared to existing methods based on multi-domain image-to-image translation.
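The attention-mask idea — let the generator change only expression-related pixels while the mask gates everything else back to the source — reduces to a soft compositing step. A minimal sketch under assumed shapes: (H, W, C) images and an (H, W, 1) mask-logit map predicted by the network; `mask_composite` is an illustrative name, not the paper's API.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mask_composite(source, generated, mask_logits):
    """Attention-mask blending: mask ~ 1 marks expression-related regions
    (take the generator's pixel); mask ~ 0 keeps the source pixel, so the
    face identity in unrelated regions is untouched."""
    mask = sigmoid(mask_logits)            # (H, W, 1), broadcasts over channels
    return mask * generated + (1.0 - mask) * source
```

In the full model the mask is learned jointly with the generator, so the network itself discovers which regions (e.g. mouth, eyes) must change for a target expression tag.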
Citations: 0
Sparse modeling for image inpainting: A multi-scale morphological patch-based k-SVD and group-based PCA
IF 3.4 CAS Tier 3 (Engineering & Technology) Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-10-01 Epub Date : 2025-05-15 DOI: 10.1016/j.image.2025.117341
Amit Soni Arya, Susanta Mukhopadhyay
Image inpainting, a crucial task in image restoration, aims to reconstruct highly degraded images with missing pixels while preserving structural and textural integrity. Traditional patch-based and group-based sparse representation methods often struggle with visual artifacts and over-smoothing, limiting their effectiveness. To address these challenges, we propose a novel multi-scale morphological patch-based and group-based sparse representation learning approach for image inpainting. Our method integrates morphological patch-based sparse representation (M-PSR) learning via k-singular value decomposition (k-SVD) with group-based sparse representation via principal component analysis (PCA) to construct adaptive dictionaries that improve reconstruction accuracy. Additionally, we employ the alternating direction method of multipliers (ADMM) to optimize the integration of the morphological patch-based and group-based sparse representations, enhancing restoration quality. Extensive experiments on various degraded images demonstrate that our approach outperforms state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). The proposed method effectively reconstructs images corrupted by missing pixels, scratches, and text inlays, achieving superior structural coherence and perceptual quality. This work contributes a robust and efficient solution for image inpainting, offering significant advances in sparse modeling and morphological image processing.
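The group-based PCA step can be illustrated in isolation: given a group of similar vectorised patches, build a PCA basis from the group itself, hard-threshold the transform coefficients to enforce sparsity, and reconstruct. This is only a minimal sketch of that one step — patch grouping, the k-SVD dictionary, and the ADMM coupling are omitted, and `group_pca_denoise` is an illustrative name.

```python
import numpy as np

def group_pca_denoise(patches, threshold):
    """patches: (n, p) group of similar vectorised patches.
    PCA basis from the group's own covariance, hard thresholding of the
    transform coefficients (the group-sparsity prior), then reconstruction."""
    mean = patches.mean(axis=0)
    X = patches - mean
    cov = X.T @ X / len(X)                  # group covariance (p, p)
    _, basis = np.linalg.eigh(cov)          # orthonormal PCA basis
    coef = X @ basis                        # transform coefficients
    coef[np.abs(coef) < threshold] = 0.0    # hard thresholding -> sparse code
    return coef @ basis.T + mean            # reconstruct in pixel space
```

Because similar patches share a low-dimensional structure, most energy concentrates in a few coefficients, so thresholding suppresses noise and fills degraded entries consistently across the group.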
Citations: 0
Journal: Signal Processing-Image Communication