Displays — Latest Publications

Omnidirectional image quality assessment via multi-perceptual feature fusion
IF 3.4 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-26 | DOI: 10.1016/j.displa.2025.103302
Cheng Zhang , Shucun Si , Bo Zhang , Jiaying Wang
Omnidirectional images are integral to virtual reality (VR) applications, yet their high resolution and spatial complexity present unique challenges for quality assessment. Current omnidirectional image quality assessment (OIQA) techniques still struggle to extract multi-perceptual features and to model interrelationships across consecutive viewports, which makes it difficult to replicate the subjective perception of the human eye. In response, this research proposes a multi-perceptual feature aggregation-based omnidirectional image quality assessment approach. The method creates a pseudo-temporal input by transforming the equirectangular projection (ERP) omnidirectional image into a series of viewports, simulating the user’s multi-viewport browsing journey. To improve frequency-domain feature extraction, the backbone network combines a convolutional neural network with 2D wavelet transform convolution (WTConv). This module decomposes the signal in the frequency domain while maintaining spatial information, making it easier to identify high-frequency features and structural defects in images. To better capture the continuous relationship between viewports, a temporal shift module (TSM) is added, which dynamically shifts the viewport features in the channel dimension, thereby improving the model’s perception of the continuity and spatial consistency of viewpoints. Additionally, the model incorporates a self-channel attention (SCA) mechanism to merge various perceptual characteristics and amplify salient feature expression, further improving the perception of important distortion regions. Experiments on the OIQA and CVIQD standard datasets show that the proposed model achieves excellent performance compared to existing full-reference and no-reference methods.
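The temporal shift operation described here is simple to picture. The PyTorch sketch below shifts a fraction of feature channels forward and backward along the viewport (pseudo-temporal) axis in the usual TSM fashion; the split ratio and tensor layout are illustrative assumptions, not the paper’s exact configuration.

    import torch

    def temporal_shift(x, shift_div=8):
        # x: (batch, viewports, channels, H, W) pseudo-temporal feature tensor.
        # Move 1/shift_div of the channels one step forward along the viewport
        # axis and another 1/shift_div one step backward (zero-padded at the
        # ends), so each viewport's features mix with those of its neighbours.
        b, t, c, h, w = x.shape
        fold = c // shift_div
        out = torch.zeros_like(x)
        out[:, 1:, :fold] = x[:, :-1, :fold]                  # shift forward
        out[:, :-1, fold:2 * fold] = x[:, 1:, fold:2 * fold]  # shift backward
        out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # rest unchanged
        return out

    feats = torch.randn(2, 8, 64, 32, 32)   # 8 viewports, 64-channel feature maps
    print(temporal_shift(feats).shape)      # torch.Size([2, 8, 64, 32, 32])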
Citations: 0
3D human pose estimation-based action recognition method for complex industrial scenarios
IF 3.4 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-24 | DOI: 10.1016/j.displa.2025.103298
Zehui Zhang , Junjie Kong , Hanfeng Liu , Haibin Shao , Cong Guan , Hao Li , Xiaobin Xu
Most industrial safety accidents (deaths or injuries) are attributed to unsafe worker actions. In these industrial scenarios, traditional monitoring methods are highly inefficient and costly. In particular, current computer vision studies struggle to accurately identify worker actions in occluded scenes. To address this challenge, this paper proposes a 3D human pose estimation-based action recognition method for complex industrial scenarios, which consists of a pose estimation model, a 3D reconstruction model, and a graph convolutional model. The pose estimation model extracts 2D pose data from video clips, the 3D reconstruction model uses the 2D pose data to produce 3D pose data, and the graph convolutional model classifies the 3D pose data for action recognition. To evaluate the proposed method, public and industrial action datasets are used for validation. The method achieves an accuracy of 97.81%, demonstrating more precise and reliable recognition in complex industrial settings.
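To make the classification stage concrete, here is a minimal PyTorch sketch of a spatial graph convolution over skeleton joints, the basic operation a graph convolutional action classifier builds on; the joint count, adjacency matrix, and layer sizes are illustrative assumptions, not the paper’s architecture.

    import torch
    import torch.nn as nn

    class SkeletonGraphConv(nn.Module):
        # Aggregate each joint's features from its graph neighbours using a
        # row-normalised adjacency matrix, then project to a new channel size.
        def __init__(self, in_ch, out_ch, adj):
            super().__init__()
            self.register_buffer("adj", adj / adj.sum(dim=1, keepdim=True))
            self.proj = nn.Linear(in_ch, out_ch)

        def forward(self, x):                # x: (batch, frames, joints, in_ch)
            x = torch.einsum("ij,btjc->btic", self.adj, x)   # neighbour aggregation
            return torch.relu(self.proj(x))

    joints = 17
    adj = torch.eye(joints)                  # self-loops only; a real skeleton adds bone edges
    layer = SkeletonGraphConv(3, 64, adj)    # 3 input channels = (x, y, z) per joint
    out = layer(torch.randn(2, 30, joints, 3))
    print(out.shape)                         # torch.Size([2, 30, 17, 64])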
Citations: 0
AMAN: Attention-Modulated Adversarial Network for blind mural image completion
IF 3.4 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-24 | DOI: 10.1016/j.displa.2025.103296
Shanzhen Lan , Shuaihui Zhang , Hanwen Zhong , Ruotong Li , Yongqin Zhang
Ancient murals, valuable historical artifacts, often undergo deterioration, such as missing parts and pigment shedding. Virtual restoration techniques have notably improved the quality of mural images and prevented further damage. Nevertheless, these methods typically rely on prior knowledge of the damaged areas’ locations. In this paper, we present a novel Attention-Modulated Adversarial Network (AMAN) for blind image completion of damaged murals. AMAN consists of two key components: damage detection and hole inpainting. During the damage detection phase, the system leverages multi-path attention and quadtree-structured transformer modules to estimate binary masks of damaged regions, adopting a coarse-to-fine strategy. After obtaining the masks, the hole inpainting stage utilizes a co-constrained generation module with group-gated convolution to restore the damaged areas. We implemented and validated AMAN on benchmark datasets. Comprehensive experiments show that AMAN generates more realistic results with fewer artifacts, significantly surpassing baseline methods in both quantitative and qualitative assessments. Specifically, the proposed algorithm achieves a 12% improvement in FID score compared to competing approaches and outperforms them by 2.1 dB in PSNR, enabling more accurate restoration of mural structures and details. Our code and experimental results are publicly accessible at https://github.com/zhwxdx/AMAN.
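As a concrete reference for the inpainting stage, the PyTorch sketch below shows a plain gated convolution applied to an image concatenated with an estimated damage mask. The paper’s group-gated convolution and co-constrained generation module add more structure than this, so treat it only as the underlying idea.

    import torch
    import torch.nn as nn

    class GatedConv2d(nn.Module):
        # Two parallel convolutions: one produces features, the other a soft
        # gate in [0, 1] that suppresses activations in damaged regions.
        def __init__(self, in_ch, out_ch, k=3):
            super().__init__()
            self.feature = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
            self.gate = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

        def forward(self, x):
            return torch.tanh(self.feature(x)) * torch.sigmoid(self.gate(x))

    # Blind completion setting: concatenate the image with the estimated mask.
    img = torch.randn(1, 3, 256, 256)
    mask = (torch.rand(1, 1, 256, 256) > 0.8).float()    # 1 = estimated damaged pixel
    x = torch.cat([img * (1 - mask), mask], dim=1)
    print(GatedConv2d(4, 32)(x).shape)                   # torch.Size([1, 32, 256, 256])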
Citations: 0
Image processing techniques for viewpoint correction and resolution enhancement in light field 3D displays
IF 3.4 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-22 | DOI: 10.1016/j.displa.2025.103295
Wonseok Son, Youngrok Kim, Sung-Wook Min
Light field 3D displays provide depth and motion parallax but suffer from reduced spatial resolution due to pixel sharing among viewpoints. We present two image-based techniques to improve visual quality without hardware changes. The first adjusts viewpoint convergence through interpolation-based resizing of elemental images, aligning the viewing zone and reducing crosstalk. The second applies super-sampling anti-aliasing (SSAA) to suppress aliasing and enhance detail. Experiments show a PSNR increase of 2.21 dB and a 32% SSIM improvement, with the contrast transfer function remaining above the 0.3 visibility threshold for higher spatial frequencies. These results demonstrate that simple image processing can improve reconstructed 3D image quality and flexibility without additional optical complexity.
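Super-sampling anti-aliasing itself reduces to rendering at a higher resolution and averaging down. The NumPy sketch below shows that box-filter downsampling step; the 2x oversampling factor and the box filter are generic choices, not necessarily those used in the paper.

    import numpy as np

    def ssaa_downsample(hi_res, factor=2):
        # hi_res: (H*factor, W*factor, 3) image rendered at 'factor' times the
        # target resolution. Average each factor x factor block to get one
        # output pixel, suppressing aliasing at the cost of render time.
        h, w, c = hi_res.shape
        h -= h % factor
        w -= w % factor
        blocks = hi_res[:h, :w].reshape(h // factor, factor, w // factor, factor, c)
        return blocks.mean(axis=(1, 3))

    oversampled = np.random.rand(512, 512, 3)       # stand-in for a 2x render
    print(ssaa_downsample(oversampled).shape)       # (256, 256, 3)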
Citations: 0
Edge-guided interactive fusion of texture and geometric features for Dunhuang mural image inpainting
IF 3.4 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-22 | DOI: 10.1016/j.displa.2025.103297
Rui Tian , Tongchen Wu , Dandan Feng , Zihao Xin , Lulu Wang
To tackle the challenges posed by geometric distortion and texture inconsistency in Dunhuang mural inpainting, this paper proposes an Edge-Guided Interactive Fusion of Texture and Geometric Features (EGIF-Net) for progressive image inpainting. This method integrates texture inpainting with geometric feature reconstruction by adopting a three-stage progressive strategy that effectively leverages both local details and global structural information within the image. In the first stage, edge information is extracted via the Parallel Downsampling Edge and Mask (PDEM) Module to facilitate the reconstruction of damaged geometric structures. The second stage employs the Deformable Interactive Attention Transformer (DIA-Transformer) module to refine local details. In the third stage, global inpainting is achieved through the Hierarchical Normalization-based Multi-scale Fusion (HNMF) module, which preserves both the overall image consistency and the fidelity of detailed reconstruction. Experimental results on Dunhuang mural images across multiple resolutions, as well as the CelebA-HQ, Places2, and Paris StreetView datasets, demonstrate that the proposed method outperforms existing approaches in both subjective evaluations and objective metrics, such as the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). EGIF-Net demonstrates exceptional performance in handling complex textures and intricate geometric structures, showcasing superior robustness and generalization compared to current inpainting techniques, particularly for large-scale, damaged regions.
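For reference, the two objective metrics named here can be computed with scikit-image as in the short sketch below (the channel_axis argument assumes scikit-image 0.19 or newer); it is a generic evaluation helper, not the authors’ code.

    import numpy as np
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def evaluate_inpainting(restored, reference):
        # Both inputs are uint8 RGB arrays of identical shape.
        psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
        ssim = structural_similarity(reference, restored, channel_axis=-1, data_range=255)
        return psnr, ssim

    ref = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
    out = ref.copy()
    out[100:140, 100:140] = 0                    # pretend a patch was badly restored
    print(evaluate_inpainting(out, ref))         # (PSNR in dB, SSIM in [0, 1])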
Citations: 0
Research on CT image deblurring method based on focal spot intensity distribution
IF 3.4 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-21 | DOI: 10.1016/j.displa.2025.103291
Fengxiao Li , Guowei Zhong , Haijun Yu , Rifeng Zhou
The finite focal spot of the X-ray source is a fundamental physical bottleneck limiting the spatial resolution of Computed Tomography (CT), as its penumbra blurring severely degrades image detail discernibility. To overcome this limitation, this paper proposes a physics-informed deblurring method. Firstly, to circumvent the challenge of acquiring ideal reference images in real scenarios, we developed a high-fidelity physical forward model to generate a high-quality paired dataset with the help of precise measurement of the focal spot’s 2D intensity distribution by the circular hole edge response backprojection method. Secondly, to learn the inverse mapping from blurred projections to ideal projections, we designed an Enhanced Phase U-Net (EPU-Net) deep learning network, which contains an innovative Eulerian Phase Unit (EPU) module. This module transforms feature maps into the Fourier domain, leveraging the high-fidelity structural information carried by the phase spectrum. Through a phase-attention-driven mechanism, it guides and rectifies the amplitude spectrum information corrupted during blurring. This mechanism enables the network to accurately restore the high-frequency components crucial to image details. Both simulated and physical experiments illustrate that EPU-Net outperforms state-of-the-art algorithms such as RCAN and CMU-Net in terms of Peak Signal-to-Noise Ratio and Feature Similarity. More importantly, in visual quality assessments, EPU-Net successfully restored fine structures indistinguishable by other methods, demonstrating exceptional deblurring performance and robust generalization capability. This study presents a novel approach combining physics-model-driven data generation and deep network-based inverse solution learning to enhance image quality in high-resolution CT systems.
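The Fourier-domain split that a phase unit builds on is easy to illustrate. The PyTorch sketch below separates a feature map into amplitude and phase spectra and recombines them losslessly; it shows only the underlying transform, not the paper’s phase-attention-driven rectification.

    import torch

    def split_amplitude_phase(feat):
        # 2D FFT over the last two (spatial) dimensions of a feature map.
        spec = torch.fft.fft2(feat)
        return spec.abs(), spec.angle()

    def recombine(amplitude, phase):
        # amplitude * exp(i * phase), then inverse FFT back to the spatial domain.
        return torch.fft.ifft2(torch.polar(amplitude, phase)).real

    x = torch.randn(1, 8, 64, 64)
    amp, pha = split_amplitude_phase(x)
    print(torch.allclose(recombine(amp, pha), x, atol=1e-4))   # True (lossless round trip)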
Citations: 0
The influence of graphical effects of touch buttons on the visual usability and driving safety of in-vehicle information systems
IF 3.4 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-21 | DOI: 10.1016/j.displa.2025.103294
Yuanyang Zuo , Jun Ma , Lijuan Zhou , Zhipeng Hu , Yi Song , Yupeng Wang
The touch screen has become the main interface for drivers to complete secondary tasks in in-vehicle information systems (IVIS), and clicking touch buttons is the most commonly used interaction behavior in IVIS. However, recognizing and operating touch buttons increases the driver's workload and causes distraction, which affects driving safety. This study aims to reduce driving distraction and improve driving safety and the driving experience by designing touch buttons that improve visual search efficiency and interaction performance. First, we designed 15 touch button schemes based on a prior theoretical summary and effect screening. Then, using simulated driving, eye-tracking measurement, and user questionnaires, we collected data for four evaluation indicators: task performance, physiological measures, driving performance, and subjective questionnaire responses. Finally, the entropy weight method was adopted to evaluate the designs comprehensively. The results indicate that touch buttons with dynamic color change, color projection, a circular shape, negative polarity, and a visible boundary exhibit better visual usability in secondary tasks. The proposed scheme provides suggestions on the visual usability of touch button design for automotive intelligent cabins, which is conducive to improving driving safety, task efficiency, and user experience.
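The entropy weight method mentioned here follows a standard recipe: convert each indicator column to proportions, compute its information entropy, and weight indicators by one minus their entropy. A NumPy sketch under those standard assumptions (benefit-type indicators, values already normalized to be positive) is below; the example numbers are made up, not the study's data.

    import numpy as np

    def entropy_weights(X):
        # X: rows are candidate schemes, columns are evaluation indicators,
        # all values positive (e.g. after min-max normalisation).
        P = X / X.sum(axis=0, keepdims=True)               # proportion per indicator
        k = 1.0 / np.log(X.shape[0])
        entropy = -k * np.where(P > 0, P * np.log(P), 0.0).sum(axis=0)
        return (1.0 - entropy) / (1.0 - entropy).sum()     # higher weight = more informative

    scores = np.array([[0.80, 0.60, 0.90],                 # made-up indicator values
                       [0.50, 0.70, 0.40],                 # for three button schemes
                       [0.90, 0.20, 0.60]])
    print(entropy_weights(scores).round(3))                # weights summing to 1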
Citations: 0
Design and evaluation of Avatar: An ultra-low-latency immersive human–machine interface for teleoperation
IF 3.4 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-19 | DOI: 10.1016/j.displa.2025.103292
Junjie Li , Dewei Han , Jian Xu , Kang Li , Zhaoyuan Ma
Spatially separated teleoperation is crucial for inaccessible or hazardous scenarios but requires intuitive human–machine interfaces (HMIs) to ensure situational awareness, especially visual perception. While 360° panoramic vision offers immersion and a wide field of view, its high latency reduces efficiency and quality and causes motion sickness. This paper presents the Avatar system, an ultra-low-latency panoramic vision platform for teleoperation and telepresence. Measured with a convenient method, Avatar’s capture-to-display latency is only 220 ms. Two experiments with 43 participants demonstrated that Avatar achieves near-scene perception efficiency in near-field visual search. Its ultra-low latency also ensured high efficiency and quality in teleoperation tasks. Analysis of subjective questionnaires and physiological indicators confirmed that Avatar provides operators with intense immersion and presence. The system’s design and verification guide future universal, efficient HMI development for diverse applications.
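As a generic illustration of how capture-to-display latency can be logged in software, the sketch below stamps each frame at capture and records the elapsed time at presentation. This is a common technique stated as an assumption, not the measurement method used for Avatar; true end-to-end measurements typically also need an external camera or photodiode observing the display.

    import time

    class LatencyProbe:
        # Stamp frames at capture, log elapsed wall-clock time when each frame
        # is presented; average over many frames for a stable figure.
        def __init__(self):
            self.samples_ms = []

        def on_capture(self, frame):
            return frame, time.monotonic()

        def on_display(self, stamped_frame):
            frame, t_capture = stamped_frame
            self.samples_ms.append((time.monotonic() - t_capture) * 1000.0)
            return frame

        def mean_latency_ms(self):
            return sum(self.samples_ms) / len(self.samples_ms)

    probe = LatencyProbe()
    for i in range(5):                        # stand-in for the capture/display loop
        stamped = probe.on_capture(f"frame-{i}")
        time.sleep(0.02)                      # pretend processing + transport takes 20 ms
        probe.on_display(stamped)
    print(round(probe.mean_latency_ms(), 1))  # roughly 20 ms in this toy loop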
Citations: 0
An adaptive U-Net framework for dermatological lesion segmentation
IF 3.4 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-17 | DOI: 10.1016/j.displa.2025.103290
Ru Huang , Zhimin Qian , Zhengbing Zhou , Zijian Chen , Jiannan Liu , Jing Han , Shuo Zhou , Jianhua He , Xiaoli Chu
With the deep integration of information technology, medical image segmentation has become a crucial tool for dermatological image analysis. However, existing dermatological lesion segmentation methods still face numerous challenges when dealing with complex lesion regions, which result in limited segmentation accuracy. Therefore, this study presents an adaptive segmentation network that draws inspiration from U-Net’s symmetric architecture, with the goal of improving the precision and generalizability of dermatological lesion segmentation. The proposed Visual Scaled Mamba (VSM) module incorporates residual pathways and adaptive scaling factors to enhance fine-grained feature extraction and enable hierarchical representation learning. Additionally, we propose the Multi-Scaled Cross-Axial Attention (MSCA) mechanism, integrating multiscale spatial features and enhancing blurred boundary recognition through dual cross-axial attention. Furthermore, we design an Adaptive Wave-Dilated Bottleneck (AWDB), employing adaptive dilated convolutions and wavelet transforms to improve feature representation and long-range dependency modeling. Experimental results on the ISIC 2016, ISIC 2018, and PH2 public datasets show that our network achieves a good compromise between model complexity and segmentation accuracy, leading to considerable performance gains in dermatological image segmentation.
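The wavelet transform in the bottleneck can be pictured with a single-level 2D DWT, which splits a map into a low-frequency approximation and three high-frequency detail bands. The sketch below uses PyWavelets (an assumed dependency) purely to show that decomposition; the adaptive dilated convolutions built on top of it are not reproduced here.

    import numpy as np
    import pywt

    def wavelet_bands(feature_map, wavelet="haar"):
        # Single-level 2D discrete wavelet transform: returns the approximation
        # band plus the horizontal, vertical, and diagonal detail bands.
        cA, (cH, cV, cD) = pywt.dwt2(feature_map, wavelet)
        return cA, cH, cV, cD

    x = np.random.rand(64, 64)
    print([band.shape for band in wavelet_bands(x)])   # four (32, 32) sub-bands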
Citations: 0
Texture generation and adaptive fusion networks for image inpainting
IF 3.4 | Tier 2 (Engineering & Technology) | Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE | Pub Date: 2025-11-17 | DOI: 10.1016/j.displa.2025.103287
Wuzhen Shi, Wu Yang, Yang Wen
Image inpainting aims to reconstruct missing regions in images with visually realistic and semantically consistent content. Existing deep learning-based methods often rely on structural priors to guide the inpainting process, but these priors provide limited information for texture recovery, leading to blurred or inconsistent details. To address this issue, we propose a Texture Generation and Adaptive Fusion Network (TGAFNet) that explicitly models texture priors to enhance high-frequency texture generation and adaptive fusion. TGAFNet consists of two branches: a main branch for coarse image generation and refinement, and a texture branch for explicit texture synthesis. The texture branch exploits both contextual cues and multi-level features from the main branch to generate sharp texture maps under the guidance of adversarial training with SN-PatchGAN. Furthermore, a Texture Patch Adaptive Fusion (TPAF) module is introduced to perform patch-to-patch matching and adaptive fusion, effectively handling cross-domain misalignment between the generated texture and coarse images. Extensive experiments on multiple benchmark datasets demonstrate that TGAFNet achieves state-of-the-art performance, generating visually realistic and fine-textured results. The findings highlight the effectiveness of explicit texture priors and adaptive fusion mechanisms for high-fidelity image inpainting, offering a promising direction for future image restoration research.
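The patch-to-patch matching at the heart of the fusion step can be sketched as a cosine-similarity search between unfolded feature patches, in the spirit of contextual attention. The PyTorch snippet below shows only that matching; the adaptive fusion that TPAF performs on top of it is omitted, and the patch size and feature shapes are illustrative.

    import torch
    import torch.nn.functional as F

    def best_patch_match(target_feat, source_feat, patch=3):
        # Unfold both feature maps into (num_patches, C*patch*patch) descriptors,
        # then pick the most similar source patch for every target patch.
        t = F.unfold(target_feat, patch, padding=patch // 2).squeeze(0).t()
        s = F.unfold(source_feat, patch, padding=patch // 2).squeeze(0).t()
        sim = F.normalize(t, dim=1) @ F.normalize(s, dim=1).t()   # cosine similarity
        return sim.argmax(dim=1)        # index of the best source patch per target patch

    tgt = torch.randn(1, 16, 32, 32)
    src = torch.randn(1, 16, 32, 32)
    idx = best_patch_match(tgt, src)
    print(idx.shape)                    # torch.Size([1024]): one match per spatial location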
Citations: 0