
Displays: Latest Publications

Edge-guided interactive fusion of texture and geometric features for Dunhuang mural image inpainting
IF 3.4 | CAS Zone 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-22 | DOI: 10.1016/j.displa.2025.103297
Rui Tian, Tongchen Wu, Dandan Feng, Zihao Xin, Lulu Wang
To tackle the challenges posed by geometric distortion and texture inconsistency in Dunhuang mural inpainting, this paper proposes an Edge-Guided Interactive Fusion of Texture and Geometric Features network (EGIF-Net) for progressive image inpainting. The method integrates texture inpainting with geometric feature reconstruction through a three-stage progressive strategy that leverages both local details and global structural information within the image. In the first stage, edge information is extracted via the Parallel Downsampling Edge and Mask (PDEM) module to facilitate the reconstruction of damaged geometric structures. The second stage employs the Deformable Interactive Attention Transformer (DIA-Transformer) module to refine local details. In the third stage, global inpainting is achieved through the Hierarchical Normalization-based Multi-scale Fusion (HNMF) module, which preserves both overall image consistency and the fidelity of detailed reconstruction. Experimental results on Dunhuang mural images at multiple resolutions, as well as on the CelebA-HQ, Places2, and Paris StreetView datasets, demonstrate that the proposed method outperforms existing approaches in both subjective evaluations and objective metrics such as the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). EGIF-Net handles complex textures and intricate geometric structures exceptionally well, showing superior robustness and generalization compared with current inpainting techniques, particularly for large-scale damaged regions.
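For readers unfamiliar with the two objective metrics cited here, the short sketch below shows how PSNR and SSIM are typically computed with scikit-image; the random arrays merely stand in for a ground-truth mural and its inpainted reconstruction, and this is not the authors' evaluation code.

```python
# Generic PSNR/SSIM computation with scikit-image; illustrative inputs only.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((256, 256, 3))                                  # ground-truth image in [0, 1]
restored = np.clip(reference + 0.05 * rng.standard_normal(reference.shape), 0, 1)

psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
ssim = structural_similarity(reference, restored, channel_axis=-1, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```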
Citations: 0
Research on CT image deblurring method based on focal spot intensity distribution
IF 3.4 | CAS Zone 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-21 | DOI: 10.1016/j.displa.2025.103291
Fengxiao Li, Guowei Zhong, Haijun Yu, Rifeng Zhou
The finite focal spot of the X-ray source is a fundamental physical bottleneck limiting the spatial resolution of Computed Tomography (CT), as its penumbra blurring severely degrades the discernibility of image detail. To overcome this limitation, this paper proposes a physics-informed deblurring method. First, to circumvent the challenge of acquiring ideal reference images in real scenarios, we developed a high-fidelity physical forward model to generate a high-quality paired dataset, aided by precise measurement of the focal spot's 2D intensity distribution using the circular-hole edge-response backprojection method. Second, to learn the inverse mapping from blurred projections to ideal projections, we designed an Enhanced Phase U-Net (EPU-Net) deep learning network, which contains an innovative Eulerian Phase Unit (EPU) module. This module transforms feature maps into the Fourier domain, leveraging the high-fidelity structural information carried by the phase spectrum. Through a phase-attention-driven mechanism, it guides and rectifies the amplitude spectrum information corrupted during blurring, enabling the network to accurately restore the high-frequency components crucial to image details. Both simulated and physical experiments show that EPU-Net outperforms state-of-the-art algorithms such as RCAN and CMU-Net in terms of Peak Signal-to-Noise Ratio and Feature Similarity. More importantly, in visual quality assessments, EPU-Net successfully restored fine structures that other methods could not resolve, demonstrating exceptional deblurring performance and robust generalization capability. This study presents a novel approach that combines physics-model-driven data generation with deep network-based inverse solution learning to enhance image quality in high-resolution CT systems.
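As a concrete, simplified illustration of the Fourier-domain idea described above (modulating the amplitude spectrum under guidance from the phase spectrum), here is a toy PyTorch layer; the gating design and layer sizes are assumptions for demonstration only and do not reproduce the authors' EPU module.

```python
# Toy phase-guided amplitude correction: the phase spectrum drives a gate that
# rescales the amplitude spectrum before inverse transformation.
import torch
import torch.nn as nn

class PhaseGuidedAmplitude(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        # 1x1 convolution turning the phase map into a per-pixel, per-channel gate.
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.fft2(x, norm="ortho")
        amplitude, phase = spec.abs(), spec.angle()
        amplitude = amplitude * self.gate(phase)            # phase-driven rescaling
        out = torch.fft.ifft2(torch.polar(amplitude, phase), norm="ortho")
        return out.real

feat = torch.randn(1, 16, 32, 32)
print(PhaseGuidedAmplitude()(feat).shape)  # torch.Size([1, 16, 32, 32])
```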
Citations: 0
The influence of graphical effects of touch buttons on the visual usability and driving safety of in-vehicle information systems
IF 3.4 | CAS Zone 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-21 | DOI: 10.1016/j.displa.2025.103294
Yuanyang Zuo, Jun Ma, Lijuan Zhou, Zhipeng Hu, Yi Song, Yupeng Wang
The touch screen has become the main interface through which drivers complete secondary tasks in in-vehicle information systems (IVIS), and tapping touch buttons is the most commonly used interaction behavior in IVIS. However, recognizing and operating touch buttons increases the driver's workload and causes distraction, which compromises driving safety. This study aims to reduce driving distraction and improve driving safety and the driving experience by designing touch buttons that improve visual search efficiency and interaction performance. First, we designed 15 touch-button schemes based on a previous theoretical summary and effect screening. Then, using simulated driving, eye-tracking measurements, and user questionnaires, we collected data for four types of evaluation indicators: task performance, physiological measures, driving performance, and subjective ratings. Finally, the entropy weight method was adopted to evaluate the designs comprehensively. The results indicate that touch buttons with dynamic effects of color change, color projection, circular shape, negative polarity, and boundary exhibit better visual usability in the secondary tasks. The proposed scheme provides suggestions for the visual usability of touch-button design in automotive intelligent cabins, helping to improve driving safety, task efficiency, and user experience.
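The entropy weight method used for the comprehensive evaluation is a standard multi-criteria weighting technique; the sketch below is a generic implementation with a made-up decision matrix (rows for button schemes, columns for indicators), not the study's actual data or code.

```python
# Entropy weight method: indicators whose values are more dispersed across
# alternatives (lower entropy) receive larger weights.
import numpy as np

def entropy_weights(matrix: np.ndarray) -> np.ndarray:
    # Normalise each indicator column so its entries sum to 1.
    p = matrix / matrix.sum(axis=0, keepdims=True)
    n = matrix.shape[0]
    with np.errstate(divide="ignore"):
        logp = np.where(p > 0, np.log(p), 0.0)
    entropy = -(p * logp).sum(axis=0) / np.log(n)   # Shannon entropy per indicator
    diversity = 1.0 - entropy                       # degree of dispersion
    return diversity / diversity.sum()              # normalised weights

scores = np.array([[0.82, 0.61, 0.75, 0.70],        # illustrative decision matrix
                   [0.78, 0.66, 0.72, 0.74],
                   [0.90, 0.58, 0.80, 0.69]])
print(entropy_weights(scores))                      # weights for the four indicators
```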
Citations: 0
Design and evaluation of Avatar: An ultra-low-latency immersive human–machine interface for teleoperation
IF 3.4 | CAS Zone 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-19 | DOI: 10.1016/j.displa.2025.103292
Junjie Li, Dewei Han, Jian Xu, Kang Li, Zhaoyuan Ma
Spatially separated teleoperation is crucial for inaccessible or hazardous scenarios but requires intuitive human–machine interfaces (HMIs) to ensure situational awareness, especially visual perception. While 360° panoramic vision offers immersion and a wide field of view, its high latency reduces efficiency and quality and causes motion sickness. This paper presents the Avatar system, an ultra-low-latency panoramic vision platform for teleoperation and telepresence. Avatar's capture-to-display latency, measured with a convenient method, is only 220 ms. Two experiments with 43 participants demonstrated that Avatar achieves near-scene perception efficiency in near-field visual search. Its ultra-low latency also ensured high efficiency and quality in teleoperation tasks. Analysis of subjective questionnaires and physiological indicators confirmed that Avatar provides operators with a strong sense of immersion and presence. The system's design and verification provide guidance for future development of universal, efficient HMIs for diverse applications.
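The abstract does not describe the latency-measurement procedure, so the snippet below is only a hedged, software-only stand-in for how capture-to-display latency can be estimated by timestamping frames at capture and at presentation; real measurements such as the 220 ms figure also involve sensor exposure and display scan-out, which this simplification ignores, and it is not the authors' method.

```python
# Rough software-level estimate of capture-to-display latency via frame timestamps.
import time
from statistics import mean

capture_ts = {}
latencies = []

def on_frame_captured(frame_id: int) -> None:
    capture_ts[frame_id] = time.monotonic()

def on_frame_displayed(frame_id: int) -> None:
    latencies.append(time.monotonic() - capture_ts.pop(frame_id))

# Simulated pipeline: capture, process for ~20 ms, then display.
for fid in range(5):
    on_frame_captured(fid)
    time.sleep(0.02)          # stand-in for encoding, transport and rendering
    on_frame_displayed(fid)

print(f"mean capture-to-display latency: {mean(latencies) * 1000:.1f} ms")
```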
Citations: 0
An adaptive U-Net framework for dermatological lesion segmentation
IF 3.4 | CAS Zone 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-17 | DOI: 10.1016/j.displa.2025.103290
Ru Huang, Zhimin Qian, Zhengbing Zhou, Zijian Chen, Jiannan Liu, Jing Han, Shuo Zhou, Jianhua He, Xiaoli Chu
With the deep integration of information technology, medical image segmentation has become a crucial tool for dermatological image analysis. However, existing dermatological lesion segmentation methods still face numerous challenges when dealing with complex lesion regions, resulting in limited segmentation accuracy. Therefore, this study presents an adaptive segmentation network that draws inspiration from U-Net's symmetric architecture, with the goal of improving the precision and generalizability of dermatological lesion segmentation. The proposed Visual Scaled Mamba (VSM) module incorporates residual pathways and adaptive scaling factors to enhance fine-grained feature extraction and enable hierarchical representation learning. Additionally, we propose the Multi-Scaled Cross-Axial Attention (MSCA) mechanism, which integrates multiscale spatial features and enhances blurred-boundary recognition through dual cross-axial attention. Furthermore, we design an Adaptive Wave-Dilated Bottleneck (AWDB) that employs adaptive dilated convolutions and wavelet transforms to improve feature representation and long-range dependency modeling. Experimental results on the ISIC 2016, ISIC 2018, and PH2 public datasets show that our network achieves a good compromise between model complexity and segmentation accuracy, leading to considerable performance gains in dermatological image segmentation.
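To make the cross-axial idea concrete, the sketch below implements plain axial self-attention along the height and width axes of a feature map; the channel size and head count are arbitrary, and this is an illustration of the general mechanism rather than the authors' MSCA module.

```python
# Minimal axial attention: rows attend along W, then columns attend along H,
# each with a residual connection.
import torch
import torch.nn as nn

class AxialAttention2D(nn.Module):
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Attention along the width axis: each row is an independent sequence.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(b, h, w, c).permute(0, 3, 1, 2) + x
        # Attention along the height axis: each column is an independent sequence.
        cols = x.permute(0, 3, 2, 1).reshape(b * w, h, c)
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(b, w, h, c).permute(0, 3, 2, 1) + x

feat = torch.randn(1, 64, 32, 32)
print(AxialAttention2D()(feat).shape)  # torch.Size([1, 64, 32, 32])
```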
Citations: 0
Texture generation and adaptive fusion networks for image inpainting
IF 3.4 | CAS Zone 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-17 | DOI: 10.1016/j.displa.2025.103287
Wuzhen Shi, Wu Yang, Yang Wen
Image inpainting aims to reconstruct missing regions in images with visually realistic and semantically consistent content. Existing deep learning-based methods often rely on structural priors to guide the inpainting process, but these priors provide limited information for texture recovery, leading to blurred or inconsistent details. To address this issue, we propose a Texture Generation and Adaptive Fusion Network (TGAFNet) that explicitly models texture priors to enhance high-frequency texture generation and adaptive fusion. TGAFNet consists of two branches: a main branch for coarse image generation and refinement, and a texture branch for explicit texture synthesis. The texture branch exploits both contextual cues and multi-level features from the main branch to generate sharp texture maps under the guidance of adversarial training with SN-PatchGAN. Furthermore, a Texture Patch Adaptive Fusion (TPAF) module is introduced to perform patch-to-patch matching and adaptive fusion, effectively handling cross-domain misalignment between the generated texture and coarse images. Extensive experiments on multiple benchmark datasets demonstrate that TGAFNet achieves state-of-the-art performance, generating visually realistic and fine-textured results. The findings highlight the effectiveness of explicit texture priors and adaptive fusion mechanisms for high-fidelity image inpainting, offering a promising direction for future image restoration research.
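The patch-to-patch matching and fusion step can be illustrated with a minimal sketch: coarse-result patches are matched to generated-texture patches by cosine similarity and softly re-assembled. The patch size and softmax temperature are illustrative assumptions and this is not the paper's TPAF module.

```python
# Soft patch matching between a coarse result and a texture map via unfold,
# cosine similarity, and fold.
import torch
import torch.nn.functional as F

def patch_match_fuse(coarse, texture, patch=8, temperature=10.0):
    # coarse, texture: (B, C, H, W) with H and W divisible by `patch`.
    b, c, h, w = coarse.shape
    cp = F.unfold(coarse, patch, stride=patch)    # (B, C*patch*patch, N)
    tp = F.unfold(texture, patch, stride=patch)   # (B, C*patch*patch, N)
    sim = torch.einsum('bdn,bdm->bnm',
                       F.normalize(cp, dim=1), F.normalize(tp, dim=1))
    weights = F.softmax(temperature * sim, dim=-1)         # soft patch assignment
    fused = torch.einsum('bnm,bdm->bdn', weights, tp)      # blended texture patches
    return F.fold(fused, (h, w), patch, stride=patch)

coarse = torch.randn(1, 3, 64, 64)
texture = torch.randn(1, 3, 64, 64)
print(patch_match_fuse(coarse, texture).shape)  # torch.Size([1, 3, 64, 64])
```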
Citations: 0
Teacher–student adversarial YOLO for domain adaptive detection in traffic scenes under adverse weather
IF 3.4 | CAS Zone 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-17 | DOI: 10.1016/j.displa.2025.103289
Xuejuan Han, Zhong Qu, Shufang Xia
Localizing traffic objects under adverse weather remains an unsolved problem, largely because collecting and labeling large-scale data is labor-intensive and time-consuming. Domain adaptive object detection (DAOD) can achieve cross-domain detection without labels; however, most existing DAOD methods are based on the two-stage Faster R-CNN and leave room for improvement in both accuracy and speed. We propose a DAOD method, TSA-YOLO, which takes full advantage of adversarial learning and pseudo-labeling to achieve high-performance cross-domain detection in fog, rain, and low-light scenes. For the input images, we generate auxiliary-domain images with CycleGAN and design strong and weak augmentation schemes to reduce the bias between the teacher and student models. Additionally, in the student self-learning module, we propose a pixel-level domain discriminator to better extract domain-invariant features, effectively narrowing the feature distribution gap between the source and target domains. In the teacher–student mutual learning module, we incorporate the mean teacher (MT) model and iteratively update its parameters to generate high-quality pseudo-labels. We evaluate our method on the public datasets Foggy Cityscapes, Rain Cityscapes, and BDD100k_Dark. The results show that TSA-YOLO significantly improves detection performance. Specifically, compared with the baseline, TSA-YOLO achieves up to a 15.0% increase in mAP@0.5 on Foggy Cityscapes and up to an 18.5% increase on Rain Cityscapes, while converging in only 50 epochs and without reducing the model's inference speed.
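A central ingredient of mean-teacher style self-training is the exponential-moving-average (EMA) update of the teacher from the student; the sketch below shows that update in its generic form, with an assumed decay value rather than the paper's setting.

```python
# Generic EMA update for a mean-teacher setup: the teacher's weights track a
# smoothed copy of the student's weights.
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, decay: float = 0.999):
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

student = nn.Linear(4, 2)
teacher = copy.deepcopy(student)   # teacher starts as a copy, kept out of the optimiser
# ... after each student optimisation step:
ema_update(teacher, student)
```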
Citations: 0
Visual speaker authentication via lip motions: Appearance consistency and semantic disentanglement
IF 3.4 | CAS Zone 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-17 | DOI: 10.1016/j.displa.2025.103288
Dawei Luo, Dongliang Xie, Wanpeng Xie
Lip-based visual biometric technology shows significant potential for improving the security of identity authentication in human–computer interaction. However, variations in lip contours and the entanglement of dynamic and semantic features limit its performance. To tackle these challenges, we revisit the personalized characteristics in lip-motion signals and propose a lip-based authentication framework built on personalized feature modeling. Specifically, the framework adopts a “shallow 3D CNN + deep 2D CNN” architecture to extract dynamic lip appearance features during speech, and introduces an appearance consistency loss to capture spatially invariant features across frames. For dynamic features, a semantic decoupling strategy is proposed to force the model to learn lip-motion patterns that are independent of semantic content. Additionally, we design a dynamic password authentication method based on visual speech recognition (VSR) to enhance system security. In our approach, appearance and motion patterns are used for speaker verification, while VSR results are used for passphrase verification; the two work jointly. Experiments on the ICSLR and GRID datasets show that our method achieves excellent performance in terms of authentication accuracy and robustness, highlighting its potential in secure human–computer interaction scenarios. The code is made publicly available at https://github.com/Davi32ML/VSALip.
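The abstract does not define the appearance consistency loss, so the sketch below shows one plausible form of such a penalty, pulling per-frame appearance embeddings toward their temporal mean; it is an assumption for illustration, not the authors' loss.

```python
# One plausible frame-level appearance-consistency penalty (illustrative only):
# per-frame embeddings of the same utterance are pulled toward their mean.
import torch

def appearance_consistency_loss(frame_feats: torch.Tensor) -> torch.Tensor:
    # frame_feats: (B, T, D) per-frame appearance embeddings
    mean_feat = frame_feats.mean(dim=1, keepdim=True)   # (B, 1, D)
    return ((frame_feats - mean_feat) ** 2).mean()

feats = torch.randn(2, 16, 128)
print(appearance_consistency_loss(feats))
```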
Citations: 0
Progressive multi-level learning for gloss-free sign language translation
IF 3.4 | CAS Zone 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-14 | DOI: 10.1016/j.displa.2025.103285
Yingchun Xie, Wei Su, Shukai Chen, Jinzhao Wu, Chuan Cai, Yongna Yuan
Gloss-free sign language translation is a key focus of sign language translation research, enabling effective communication between deaf and hearing individuals in a broader and more universal manner. In this work, we propose a Progressive Multi-Level Learning model for sign language translation (PML-SLT), which progressively learns sign representations to improve video understanding. Rather than requiring every frame to attend to all other frames during attention computation, our approach introduces a progressive perceptual-field expansion mechanism that gradually broadens the attention scope across video frames, effectively capturing both local and global information. Besides, to fully exploit multi-granularity information, we employ a multi-level feature integration scheme that transfers the output of each encoder layer to the corresponding decoder layer, enabling comprehensive utilization of hierarchical temporal features. Additionally, we introduce a multi-modal triplet loss to harmonize semantic information across modalities, aligning the text space with the video space so that the video features acquire richer semantic meaning. Experimental results on two public datasets demonstrate the promising translation performance of the proposed PML-SLT model.
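One simple way to realize a progressively widening attention scope is a banded attention mask whose window grows with layer depth; the sketch below builds such a mask (True marks positions excluded from attention, matching PyTorch's boolean attn_mask convention). The window schedule is an illustrative assumption, not the paper's.

```python
# Banded attention mask that widens with layer index, so deeper layers see a
# larger temporal neighbourhood around each frame.
import torch

def progressive_mask(seq_len: int, layer_idx: int, base_radius: int = 2) -> torch.Tensor:
    radius = base_radius * (layer_idx + 1)
    idx = torch.arange(seq_len)
    dist = (idx[:, None] - idx[None, :]).abs()
    # True marks positions that are masked out (beyond the current window).
    return dist > radius

for k in range(3):
    mask = progressive_mask(seq_len=8, layer_idx=k)
    print(f"layer {k}: frame 4 attends to {int((~mask[4]).sum())} frames")
```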
Citations: 0
Bioinspired micro-/nano-composite structures for simultaneous enhancement of light extraction efficiency and output uniformity in Micro-LEDs
IF 3.4 | CAS Zone 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-13 | DOI: 10.1016/j.displa.2025.103286
Jingyu Liu, Jiawei Zhang, Zhenyou Zou, Yibin Lin, Jinyu Ye, Wenfu Huang, Chaoxing Wu, Yongai Zhang, Jie Sun, Qun Yan, Xiongtu Zhou
The strong total internal reflection (TIR) in micro light-emitting diodes (Micro-LEDs) significantly limits light extraction efficiency (LEE) and uniformity of light distribution, thereby hindering their industrial applications. Inspired by the layered surface structures found in firefly lanterns, this study proposes a flexible bioinspired micro-/nano-composite structure that effectively enhances both LEE and the uniformity of light output. Finite-Difference Time-Domain (FDTD) simulations demonstrate that microstructures contribute to directional light extraction, whereas nanostructures facilitate overall optical optimization. A novel fabrication approach integrating grayscale photolithography, mechanical stretching, and plasma treatment was developed, enabling the realization of micro-/nano-composite structures with tunable design parameters. Experimental results indicate a 40.5% increase in external quantum efficiency (EQE) and a 41.6% improvement in power efficiency (PE) for blue Micro-LEDs, accompanied by enhanced angular light distribution, leading to wider viewing angles and near-ideal light uniformity. This advancement effectively resolves the longstanding challenge of balancing efficiency and uniformity in light extraction, thereby facilitating the industrialization of Micro-LED technology.
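As background on why total internal reflection caps light extraction, the short calculation below gives the critical angle and single-surface escape-cone fraction for a GaN/air interface, assuming a typical refractive index of about 2.4 for GaN (a textbook value, not a figure from the paper).

```python
# Escape-cone estimate for a GaN/air interface: only rays inside the critical
# angle can leave the chip through a flat top surface.
import math

n_gan, n_air = 2.4, 1.0                            # assumed refractive indices
theta_c = math.asin(n_air / n_gan)                 # critical angle
escape_fraction = 0.5 * (1.0 - math.cos(theta_c))  # solid-angle fraction of one escape cone

print(f"critical angle: {math.degrees(theta_c):.1f} deg")        # ~24.6 deg
print(f"single-surface escape fraction: {escape_fraction:.1%}")  # ~4.5%
```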
Citations: 0