
Displays: Latest Publications

Design and evaluation of Avatar: An ultra-low-latency immersive human–machine interface for teleoperation
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 (Computer Science, Hardware & Architecture) · Pub Date: 2025-11-19 · DOI: 10.1016/j.displa.2025.103292
Junjie Li, Dewei Han, Jian Xu, Kang Li, Zhaoyuan Ma
Spatially separated teleoperation is crucial for inaccessible or hazardous scenarios but requires intuitive human–machine interfaces (HMIs) to ensure situational awareness, especially visual perception. While 360° panoramic vision offers immersion and a wide field of view, its high latency reduces efficiency and quality and causes motion sickness. This paper presents the Avatar system, an ultra-low-latency panoramic vision platform for teleoperation and telepresence. Measured with a convenient method, Avatar’s capture-to-display latency is only 220 ms. Two experiments with 43 participants demonstrated that Avatar achieves near-scene perception efficiency in near-field visual search. Its ultra-low latency also ensured high efficiency and quality in teleoperation tasks. Analysis of subjective questionnaires and physiological indicators confirmed that Avatar provides operators with intense immersion and presence. The system’s design and verification guide future development of universal, efficient HMIs for diverse applications.
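Though the paper’s own “convenient method” is not described here, a generic flash-to-detection loop gives a feel for how capture-to-display latency can be estimated in software. The sketch below is an illustrative assumption, not the Avatar measurement protocol; the camera index, flash threshold, and stimulus window are all hypothetical.

```python
# Generic flash-to-detection latency sketch: flash the display, then time how
# long the camera loop takes to observe the brightness step.
import time

import cv2
import numpy as np

CAM_INDEX = 0          # hypothetical index of the camera feeding the pipeline
THRESHOLD = 60.0       # mean-gray jump that counts as "flash detected"

def measure_loop_latency(trials: int = 10, timeout_s: float = 2.0) -> float:
    """Median flash-to-detection time (ms) over the display + camera loop."""
    cap = cv2.VideoCapture(CAM_INDEX)
    white = np.full((480, 640), 255, dtype=np.uint8)
    black = np.zeros((480, 640), dtype=np.uint8)
    samples = []
    for _ in range(trials):
        cv2.imshow("stimulus", black)
        cv2.waitKey(300)                     # let the loop settle on black
        ok, frame = cap.read()
        if not ok:
            continue
        baseline = float(np.mean(frame))
        t0 = time.perf_counter()
        cv2.imshow("stimulus", white)
        cv2.waitKey(1)                       # flash the stimulus
        while time.perf_counter() - t0 < timeout_s:
            ok, frame = cap.read()
            if ok and float(np.mean(frame)) - baseline > THRESHOLD:
                samples.append((time.perf_counter() - t0) * 1000.0)
                break
    cap.release()
    return float(np.median(samples)) if samples else float("nan")

if __name__ == "__main__":
    print(f"estimated loop latency: {measure_loop_latency():.1f} ms")
```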
{"title":"Design and evaluation of Avatar: An ultra-low-latency immersive human–machine interface for teleoperation","authors":"Junjie Li ,&nbsp;Dewei Han ,&nbsp;Jian Xu ,&nbsp;Kang Li ,&nbsp;Zhaoyuan Ma","doi":"10.1016/j.displa.2025.103292","DOIUrl":"10.1016/j.displa.2025.103292","url":null,"abstract":"<div><div>Spatially separated teleoperation is crucial for inaccessible or hazardous scenarios but requires intuitive human–machine interfaces (HMIs) to ensure situational awareness, especially visual perception. While 360°panoramic vision offers immersion and a wide field of view, its high latency reduces efficiency and quality and causes motion sickness. This paper presents the Avatar system, an ultra-low-latency panoramic vision platform for teleoperation and telepresence. Using a convenient method, Avatar’s measured capture-to-display latency is only 220 ms. Two experiments with 43 participants demonstrated that Avatar achieves near-scene perception efficiency in near-field visual search. Its ultra-low latency also ensured high efficiency and quality in teleoperation tasks. Analysis of subjective questionnaires and physiological indicators confirmed that Avatar provides operators with intense immersion and presence. The system’s design and verification guide future universal, efficient HMI development for diverse applications.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103292"},"PeriodicalIF":3.4,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145580485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
An adaptive U-Net framework for dermatological lesion segmentation
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 (Computer Science, Hardware & Architecture) · Pub Date: 2025-11-17 · DOI: 10.1016/j.displa.2025.103290
Ru Huang, Zhimin Qian, Zhengbing Zhou, Zijian Chen, Jiannan Liu, Jing Han, Shuo Zhou, Jianhua He, Xiaoli Chu
With the deep integration of information technology, medical image segmentation has become a crucial tool for dermatological image analysis. However, existing dermatological lesion segmentation methods still face numerous challenges when dealing with complex lesion regions, resulting in limited segmentation accuracy. This study therefore presents an adaptive segmentation network that draws inspiration from U-Net’s symmetric architecture, with the goal of improving the precision and generalizability of dermatological lesion segmentation. The proposed Visual Scaled Mamba (VSM) module incorporates residual pathways and adaptive scaling factors to enhance fine-grained feature extraction and enable hierarchical representation learning. Additionally, we propose the Multi-Scaled Cross-Axial Attention (MSCA) mechanism, integrating multiscale spatial features and enhancing blurred boundary recognition through dual cross-axial attention. Furthermore, we design an Adaptive Wave-Dilated Bottleneck (AWDB), employing adaptive dilated convolutions and wavelet transforms to improve feature representation and long-range dependency modeling. Experimental results on the ISIC 2016, ISIC 2018, and PH2 public datasets show that our network achieves a good compromise between model complexity and segmentation accuracy, leading to considerable performance gains in dermatological image segmentation.
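As a rough illustration of the idea of residual pathways combined with adaptive scaling factors, the sketch below shows a generic residual block with a learnable per-channel scale. It is not the VSM module itself; the layer layout, channel count, and activation choice are assumptions.

```python
# Illustrative residual block with a learnable adaptive scaling factor on the
# residual branch (generic sketch, not the paper's VSM module).
import torch
import torch.nn as nn

class AdaptiveScaledResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.GELU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        # learnable per-channel scale applied to the residual branch
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.scale * self.body(x)

if __name__ == "__main__":
    block = AdaptiveScaledResidualBlock(32)
    print(block(torch.randn(2, 32, 64, 64)).shape)  # torch.Size([2, 32, 64, 64])
```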
{"title":"An adaptive U-Net framework for dermatological lesion segmentation","authors":"Ru Huang ,&nbsp;Zhimin Qian ,&nbsp;Zhengbing Zhou ,&nbsp;Zijian Chen ,&nbsp;Jiannan Liu ,&nbsp;Jing Han ,&nbsp;Shuo Zhou ,&nbsp;Jianhua He ,&nbsp;Xiaoli Chu","doi":"10.1016/j.displa.2025.103290","DOIUrl":"10.1016/j.displa.2025.103290","url":null,"abstract":"<div><div>With the deep integration of information technology, medical image segmentation has become a crucial tool for dermatological image analysis. However, existing dermatological lesion segmentation methods still face numerous challenges when dealing with complex lesion regions, which result in limited segmentation accuracy. Therefore, this study presents an adaptive segmentation network that draws inspiration from U-Net’s symmetric architecture, with the goal of improving the precision and generalizability of dermatological lesion segmentation. The proposed Visual Scaled Mamba (VSM) module incorporates residual pathways and adaptive scaling factors to enhance fine-grained feature extraction and enable hierarchical representation learning. Additionally, we propose the Multi-Scaled Cross-Axial Attention (MSCA) mechanism, integrating multiscale spatial features and enhancing blurred boundary recognition through dual cross-axial attention. Furthermore, we design an Adaptive Wave-Dilated Bottleneck (AWDB), employing adaptive dilated convolutions and wavelet transforms to improve feature representation and long-range dependency modeling. Through experimental results on the ISIC 2016, ISIC 2018, and PH2 public datasets show that our network achieves a good compromise between model complexity and segmentation accuracy, leading to considerable performance increases in dermatological image segmentation.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103290"},"PeriodicalIF":3.4,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145624546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Texture generation and adaptive fusion networks for image inpainting
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 (Computer Science, Hardware & Architecture) · Pub Date: 2025-11-17 · DOI: 10.1016/j.displa.2025.103287
Wuzhen Shi, Wu Yang, Yang Wen
Image inpainting aims to reconstruct missing regions in images with visually realistic and semantically consistent content. Existing deep learning-based methods often rely on structural priors to guide the inpainting process, but these priors provide limited information for texture recovery, leading to blurred or inconsistent details. To address this issue, we propose a Texture Generation and Adaptive Fusion Network (TGAFNet) that explicitly models texture priors to enhance high-frequency texture generation and adaptive fusion. TGAFNet consists of two branches: a main branch for coarse image generation and refinement, and a texture branch for explicit texture synthesis. The texture branch exploits both contextual cues and multi-level features from the main branch to generate sharp texture maps under the guidance of adversarial training with SN-PatchGAN. Furthermore, a Texture Patch Adaptive Fusion (TPAF) module is introduced to perform patch-to-patch matching and adaptive fusion, effectively handling cross-domain misalignment between the generated texture and coarse images. Extensive experiments on multiple benchmark datasets demonstrate that TGAFNet achieves state-of-the-art performance, generating visually realistic and fine-textured results. The findings highlight the effectiveness of explicit texture priors and adaptive fusion mechanisms for high-fidelity image inpainting, offering a promising direction for future image restoration research.
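To make the patch-to-patch matching idea concrete, the following sketch matches patches of a coarse completion against patches of a generated texture map by cosine similarity. It is a minimal illustration in the spirit of TPAF, not the paper’s module; patch size and tensor shapes are assumptions.

```python
# Minimal patch-to-patch matching by cosine similarity between a coarse
# completion and a generated texture map (illustrative sketch only).
import torch
import torch.nn.functional as F

def match_patches(coarse: torch.Tensor, texture: torch.Tensor, patch: int = 3):
    """coarse, texture: (B, C, H, W). Returns the index of the best-matching
    texture patch for every coarse patch under cosine similarity."""
    unfold = torch.nn.Unfold(kernel_size=patch, stride=patch)
    cp = F.normalize(unfold(coarse), dim=1)   # (B, C*patch*patch, N)
    tp = F.normalize(unfold(texture), dim=1)  # (B, C*patch*patch, N)
    sim = torch.bmm(cp.transpose(1, 2), tp)   # (B, N, N) cosine similarities
    return sim.argmax(dim=-1)                 # (B, N) best texture patch per coarse patch

if __name__ == "__main__":
    idx = match_patches(torch.randn(1, 16, 24, 24), torch.randn(1, 16, 24, 24))
    print(idx.shape)  # torch.Size([1, 64])
```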
{"title":"Texture generation and adaptive fusion networks for image inpainting","authors":"Wuzhen Shi,&nbsp;Wu Yang,&nbsp;Yang Wen","doi":"10.1016/j.displa.2025.103287","DOIUrl":"10.1016/j.displa.2025.103287","url":null,"abstract":"<div><div>Image inpainting aims to reconstruct missing regions in images with visually realistic and semantically consistent content. Existing deep learning-based methods often rely on structural priors to guide the inpainting process, but these priors provide limited information for texture recovery, leading to blurred or inconsistent details. To address this issue, we propose a Texture Generation and Adaptive Fusion Network (TGAFNet) that explicitly models texture priors to enhance high-frequency texture generation and adaptive fusion. TGAFNet consists of two branches: a main branch for coarse image generation and refinement, and a texture branch for explicit texture synthesis. The texture branch exploits both contextual cues and multi-level features from the main branch to generate sharp texture maps under the guidance of adversarial training with SN-PatchGAN. Furthermore, a Texture Patch Adaptive Fusion (TPAF) module is introduced to perform patch-to-patch matching and adaptive fusion, effectively handling cross-domain misalignment between the generated texture and coarse images. Extensive experiments on multiple benchmark datasets demonstrate that TGAFNet achieves state-of-the-art performance, generating visually realistic and fine-textured results. The findings highlight the effectiveness of explicit texture priors and adaptive fusion mechanisms for high-fidelity image inpainting, offering a promising direction for future image restoration research.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103287"},"PeriodicalIF":3.4,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145580483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Teacher–student adversarial YOLO for domain adaptive detection in traffic scenes under adverse weather
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 (Computer Science, Hardware & Architecture) · Pub Date: 2025-11-17 · DOI: 10.1016/j.displa.2025.103289
Xuejuan Han, Zhong Qu, Shufang Xia
The problem of difficult traffic object localization under adverse weather remains unsolved because collecting and labeling large-scale data is labor-intensive and time-consuming. Domain adaptive object detection (DAOD) can achieve cross-domain detection without labeling; however, most existing DAOD methods are based on the two-stage Faster R-CNN and need improvement in both accuracy and speed. We propose a DAOD method, TSA-YOLO, which takes full advantage of adversarial learning and pseudo-labeling to achieve high-performance cross-domain detection for fog, rain, and low-light scenes. For the input images, we generate auxiliary-domain images with CycleGAN and design strong and weak augmentation to reduce the bias of the teacher and student models. Additionally, in the student self-learning module, we propose a pixel-level domain discriminator to better extract domain-invariant features, effectively narrowing the feature-distribution gap between the source and target domains. In the teacher–student mutual learning module, we incorporate the mean teacher (MT) model and iteratively update parameters to generate high-quality pseudo-labels. We evaluate our method on the public datasets Foggy Cityscapes, Rain Cityscapes, and BDD100k_Dark. The results show that TSA-YOLO significantly improves detection performance. Specifically, compared with the baseline, TSA-YOLO achieves up to a 15.0% increase in mAP@0.5 on Foggy Cityscapes and up to an 18.5% increase on Rain Cityscapes, while converging in only 50 epochs and without reducing the model’s inference speed.
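Two of the standard ingredients named above, adversarial feature alignment and the mean-teacher update, can be sketched compactly. The code below shows a generic gradient-reversal layer and an EMA teacher update; it is illustrative only and the momentum value is an assumption, not the TSA-YOLO implementation.

```python
# Generic gradient-reversal layer (for an adversarial domain discriminator) and
# the EMA update used by mean-teacher schemes (illustrative sketch).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd: float = 1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reverse gradients flowing back into the feature extractor
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd: float = 1.0):
    return GradReverse.apply(x, lambd)

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, momentum: float = 0.999):
    """Mean-teacher update: teacher <- m * teacher + (1 - m) * student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)
```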
{"title":"Teacher–student adversarial YOLO for domain adaptive detection in traffic scenes under adverse weather","authors":"Xuejuan Han ,&nbsp;Zhong Qu ,&nbsp;Shufang Xia","doi":"10.1016/j.displa.2025.103289","DOIUrl":"10.1016/j.displa.2025.103289","url":null,"abstract":"<div><div>The problem of difficult traffic object localization under adverse weather has not been solved due to the labor-intensive and time-consuming process of collecting and labeling large-scale data. Domain adaptive object detection (DAOD) can achieve cross-domain detection without labeling, however, most of the existing DAOD methods are based on two-stage Faster R-CNN, which needs to be improved in both accuracy and speed. We propose a DAOD method TSA-YOLO, which takes full advantage of adversarial learning and pseudo-labeling to achieve high-performance cross-domain detection for fog, rain, and low-light scenes. For the input images, we generate auxiliary domain images by CycleGAN, also design a strong and weak enhancement method to reduce the bias of the teacher and student models. Additionally, in the student self-learning module, we propose a pixel-level domain discriminator to better extract domain-invariant features, effectively narrowing the feature distribution gap between the source and target domains. In the teacher–student mutual learning module, we incorporate the mean teacher (MT) model, iteratively update parameters to generate high-quality pseudo-labels. In addition, we evaluate our method on the public datasets Foggy Cityscapes, Rain Cityscapes, and BDD100k_Dark. The results show that TSA-YOLO significantly improves detection performance. Specifically, compared with the baseline, TSA-YOLO achieves up to a 15.0% increase in <em>[email protected]</em> on Foggy Cityscapes and up to an 18.5% increase on Rain Cityscapes, while converging in only 50 epochs and without reducing the model’s inference speed.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103289"},"PeriodicalIF":3.4,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145580486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Visual speaker authentication via lip motions: Appearance consistency and semantic disentanglement
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 (Computer Science, Hardware & Architecture) · Pub Date: 2025-11-17 · DOI: 10.1016/j.displa.2025.103288
Dawei Luo, Dongliang Xie, Wanpeng Xie
Lip-based visual biometric technology shows significant potential for improving the security of identity authentication in human–computer interaction. However, variations in lip contours and the entanglement of dynamic and semantic features limit its performance. To tackle these challenges, we revisit the personalized characteristics in lip-motion signals and propose a lip-based authentication framework built on personalized feature modeling. Specifically, the framework adopts a “shallow 3D CNN + deep 2D CNN” architecture to extract dynamic lip appearance features during speech, and introduces an appearance consistency loss to capture spatially invariant features across frames. For dynamic features, a semantic decoupling strategy is proposed to force the model to learn lip-motion patterns that are independent of semantic content. Additionally, we design a dynamic password authentication method based on visual speech recognition (VSR) to enhance system security. In our approach, appearance and motion patterns are used for speaker verification, while VSR results are used for passphrase verification; the two operate jointly. Experiments on the ICSLR and GRID datasets show that our method achieves excellent performance in terms of authentication accuracy and robustness, highlighting its potential in secure human–computer interaction scenarios. The code is made publicly available at https://github.com/Davi32ML/VSALip.
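For readers unfamiliar with triplet objectives, the sketch below shows a plain triplet loss over speaker embeddings. The paper’s multi-modal formulation is not reproduced; the margin, embedding dimension, and normalization step are assumptions.

```python
# Plain triplet loss over L2-normalized speaker embeddings (illustrative sketch
# of a triplet objective, not the paper's multi-modal version).
import torch
import torch.nn.functional as F

def speaker_triplet_loss(anchor: torch.Tensor,
                         positive: torch.Tensor,
                         negative: torch.Tensor,
                         margin: float = 0.3) -> torch.Tensor:
    """anchor/positive: embeddings of the same speaker, negative: a different
    speaker; all tensors are (B, D)."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negative, dim=-1)
    return F.triplet_margin_loss(a, p, n, margin=margin)

if __name__ == "__main__":
    B, D = 8, 256
    loss = speaker_triplet_loss(torch.randn(B, D), torch.randn(B, D), torch.randn(B, D))
    print(loss.item())
```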
{"title":"Visual speaker authentication via lip motions: Appearance consistency and semantic disentanglement","authors":"Dawei Luo,&nbsp;Dongliang Xie,&nbsp;Wanpeng Xie","doi":"10.1016/j.displa.2025.103288","DOIUrl":"10.1016/j.displa.2025.103288","url":null,"abstract":"<div><div>Lip-based visual biometric technology shows significant potential for improving the security of identity authentication in human–computer interaction. However, variations in lip contours and the entanglement of dynamic and semantic features limit its performance. To tackle these challenges, we revisit the personalized characteristics in lip-motion signals and propose a lip-based authentication framework based on personalized feature modeling. Specifically, the framework adopts a “shallow 3D CNN + deep 2D CNN” architecture to extract dynamic lip appearance features during speech, and introduces an appearance consistency loss to capture spatially invariant features across frames. For dynamic features, a semantic decoupling strategy is proposed to force the model to learn lip-motion patterns that are independent of semantic content. Additionally, we design a dynamic password authentication method based on visual speech recognition (VSR) to enhance system security. In our approach, appearance and motion patterns are used for speaker verification, while VSR results are used for passphrase verification — they working jointly. Experiments on the ICSLR and GRID datasets show that our method achieves excellent performance in terms of authentication accuracy and robustness, highlighting its potential in secure human–computer interaction scenarios. The code is made publicly available at <span><span>https://github.com/Davi32ML/VSALip</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103288"},"PeriodicalIF":3.4,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145624548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Progressive multi-level learning for gloss-free sign language translation
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 (Computer Science, Hardware & Architecture) · Pub Date: 2025-11-14 · DOI: 10.1016/j.displa.2025.103285
Yingchun Xie, Wei Su, Shukai Chen, Jinzhao Wu, Chuan Cai, Yongna Yuan
Gloss-free sign language translation is a key focus in sign language translation research, enabling effective communication between deaf and hearing individuals in a broader and more universal manner. In this work, we propose a Progressive Multi-Level Learning model for sign language translation (PML-SLT), which progressively learns sign representations to improve video understanding. Rather than requiring every frame to attend to all other frames during attention computation, our approach introduces a progressive perceptual-field expansion mechanism that gradually broadens the attention scope across video frames. This mechanism continuously expands the perceptual field between frames, effectively capturing both local and global information. Besides, to fully exploit multi-granularity information, we employ a multi-level feature integration scheme that transfers the output of each encoder layer to the corresponding decoder layer, enabling comprehensive utilization of hierarchical temporal features. Additionally, we introduce a multi-modal triplet loss to harmonize semantic information across modalities, aligning the text space with the video space so that the video features acquire richer semantic meaning. Experimental results on two public datasets demonstrate the promising translation performance of the proposed PML-SLT model.
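A simple way to picture a progressively widening attention scope is a banded temporal mask whose window grows with layer depth, as in the sketch below. This is an illustrative stand-in for the mechanism described above, not the PML-SLT implementation; the window schedule is an assumption.

```python
# Banded attention masks over video frames whose temporal window widens with
# layer depth (illustrative sketch of gradual perceptual-field expansion).
import torch

def banded_attention_mask(num_frames: int, window: int) -> torch.Tensor:
    """Boolean mask (T, T): True where attention is allowed, i.e. |i - j| <= window."""
    idx = torch.arange(num_frames)
    return (idx[None, :] - idx[:, None]).abs() <= window

def progressive_masks(num_frames: int, num_layers: int, base_window: int = 2):
    # double the window at every layer, capping at full self-attention
    return [banded_attention_mask(num_frames, min(base_window * 2 ** l, num_frames))
            for l in range(num_layers)]

if __name__ == "__main__":
    masks = progressive_masks(num_frames=16, num_layers=4)
    print([int(m.sum()) for m in masks])  # allowed attention pairs grow per layer
```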
{"title":"Progressive multi-level learning for gloss-free sign language translation","authors":"Yingchun Xie ,&nbsp;Wei Su ,&nbsp;Shukai Chen ,&nbsp;Jinzhao Wu ,&nbsp;Chuan Cai ,&nbsp;Yongna Yuan","doi":"10.1016/j.displa.2025.103285","DOIUrl":"10.1016/j.displa.2025.103285","url":null,"abstract":"<div><div>Gloss-free sign language translation is a key focus in sign language translation research, enabling effective communication between the deaf and the hearing individuals in a broader and more universal manner. In this work, we propose a Progressive Multi-Level Learning model for sign language translation (PML-SLT), which progressively learns sign representations to improve video understanding. Rather than requiring every frame to attend to all other frames during attention computation, our approach introduces a progressive perceptual field expansion mechanism that gradually broadens the attention scope across video frames. This mechanism continuously expands the perceptual field between frames, effectively capturing both local and global information. Besides, to fully exploit multi-granularity information, we employ a multi-level feature integration scheme that transfers the output of each encoder layer to the corresponding decoder layer, enabling comprehensive utilization of hierarchical temporal features. Additionally, we introduce a multi-modal triplet loss to harmonize semantic information across modalities, aligning the text space with the video space so that the video features acquire richer semantic meaning. Experimental results on two public datasets demonstrate the promising translation performance of the proposed PML-SLT model.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103285"},"PeriodicalIF":3.4,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145580488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Bioinspired micro-/nano-composite structures for simultaneous enhancement of light extraction efficiency and output uniformity in Micro-LEDs
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 (Computer Science, Hardware & Architecture) · Pub Date: 2025-11-13 · DOI: 10.1016/j.displa.2025.103286
Jingyu Liu, Jiawei Zhang, Zhenyou Zou, Yibin Lin, Jinyu Ye, Wenfu Huang, Chaoxing Wu, Yongai Zhang, Jie Sun, Qun Yan, Xiongtu Zhou
The strong total internal reflection (TIR) in micro light-emitting diodes (Micro-LEDs) significantly limits light extraction efficiency (LEE) and uniformity of light distribution, thereby hindering their industrial applications. Inspired by the layered surface structures found in firefly lanterns, this study proposes a flexible bioinspired micro-/nano-composite structure that effectively enhances both LEE and the uniformity of light output. Finite-Difference Time-Domain (FDTD) simulations demonstrate that microstructures contribute to directional light extraction, whereas nanostructures facilitate overall optical optimization. A novel fabrication approach integrating grayscale photolithography, mechanical stretching, and plasma treatment was developed, enabling the realization of micro-/nano-composite structures with tunable design parameters. Experimental results indicate a 40.5% increase in external quantum efficiency (EQE) and a 41.6% improvement in power efficiency (PE) for blue Micro-LEDs, accompanied by enhanced angular light distribution, leading to wider viewing angles and near-ideal light uniformity. This advancement effectively resolves the longstanding challenge of balancing efficiency and uniformity in light extraction, thereby facilitating the industrialization of Micro-LED technology.
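As background for why TIR is such a strong limit, a back-of-envelope escape-cone estimate for a planar semiconductor/air interface is sketched below. This is generic textbook optics, not a result from the paper, and the refractive index value is an assumption.

```python
# Back-of-envelope escape-cone estimate for a planar high-index/air interface:
# only light emitted within the critical angle can leave the chip.
import math

def escape_cone_fraction(n_semiconductor: float, n_outside: float = 1.0) -> float:
    """Fraction of isotropically emitted light inside one face's escape cone:
    (1 - cos(theta_c)) / 2, with sin(theta_c) = n_outside / n_semiconductor."""
    theta_c = math.asin(n_outside / n_semiconductor)
    return (1.0 - math.cos(theta_c)) / 2.0

if __name__ == "__main__":
    n_gan = 2.4  # assumed refractive index of GaN in the visible range
    theta_c = math.degrees(math.asin(1.0 / n_gan))
    print(f"critical angle ~ {theta_c:.1f} deg")                                # ~24.6 deg
    print(f"single-face extraction ~ {escape_cone_fraction(n_gan) * 100:.1f} %")  # ~4.5 %
```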
{"title":"Bioinspired micro-/nano-composite structures for simultaneous enhancement of light extraction efficiency and output uniformity in Micro-LEDs","authors":"Jingyu Liu ,&nbsp;Jiawei Zhang ,&nbsp;Zhenyou Zou ,&nbsp;Yibin Lin ,&nbsp;Jinyu Ye ,&nbsp;Wenfu Huang ,&nbsp;Chaoxing Wu ,&nbsp;Yongai Zhang ,&nbsp;Jie Sun ,&nbsp;Qun Yan ,&nbsp;Xiongtu Zhou","doi":"10.1016/j.displa.2025.103286","DOIUrl":"10.1016/j.displa.2025.103286","url":null,"abstract":"<div><div>The strong total internal reflection (TIR) in micro light-emitting diodes (Micro-LEDs) significantly limits light extraction efficiency (LEE) and uniformity of light distribution, thereby hindering their industrial applications. Inspired by the layered surface structures found in firefly lanterns, this study proposes a flexible bioinspired micro-/nano-composite structure that effectively enhances both LEE and the uniformity of light output. Finite-Difference Time-Domain (FDTD) simulations demonstrate that microstructures contribute to directional light extraction, whereas nanostructures facilitate overall optical optimization. A novel fabrication approach integrating grayscale photolithography, mechanical stretching, and plasma treatment was developed, enabling the realization of micro-/nano-composite structures with tunable design parameters. Experimental results indicate a 40.5% increase in external quantum efficiency (EQE) and a 41.6% improvement in power efficiency (PE) for blue Micro-LEDs, accompanied by enhanced angular light distribution, leading to wider viewing angles and near-ideal light uniformity. This advancement effectively resolves the longstanding challenge of balancing efficiency and uniformity in light extraction, thereby facilitating the industrialization of Micro-LED technology.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103286"},"PeriodicalIF":3.4,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145580484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Measuring points for video subjective assessment – Impact of memory and stimulus variability
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 (Computer Science, Hardware & Architecture) · Pub Date: 2025-11-12 · DOI: 10.1016/j.displa.2025.103283
Tomasz Konaszyński, Avrajyoti Dutta, Burak Gizlice, Dawid Juszka, Mikołaj Leszczuk
The work describes a QoE experiment assessing the impact of memory and stimulus variability on subjective assessments of 2D videos, together with an attempt to identify dominant points, i.e., moments in time or events that influence the overall assessment of the changing quality of the evaluated videos. Based on the results of the conducted QoE experiment, the impact of varying video quality on the subjective assessment of 2D videos was clearly demonstrated, both in terms of results eligibility and subjective ratings.
The concept of “measurement points” was introduced, i.e., points in time or events associated with the highest impact on the values of subjective ratings when variable-quality videos are assessed or videos are displayed in a variable, controlled environment.
The relationship between memory of particular aspects of the video presentation, including memory of subsequent appearances of a given video, and the values obtained from the assessment results was also demonstrated. Several regularities were observed: a very strong negative effect of variability in the technical quality of the rated videos on results eligibility; an effect of boredom or annoyance from watching a longer video of variable quality; a “last impression effect”, i.e., videos whose quality increases over time achieve higher MOS values than videos whose quality decreases over time; and better assessments of “fresh” observations compared to the following ones.
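Since the discussion above is framed in terms of MOS values, the sketch below shows the standard way a per-condition MOS and an approximate 95% confidence interval are computed from subject ratings. The ratings are made-up numbers for illustration.

```python
# Mean Opinion Score (MOS) with an approximate 95% confidence interval for one
# test condition (illustrative sketch; ratings are invented).
import math
import statistics

def mos_with_ci(ratings: list[float]) -> tuple[float, float]:
    """Return (MOS, half-width of an approximate 95% CI)."""
    n = len(ratings)
    mos = statistics.mean(ratings)
    sd = statistics.stdev(ratings)        # sample standard deviation
    ci95 = 1.96 * sd / math.sqrt(n)       # normal approximation
    return mos, ci95

if __name__ == "__main__":
    ratings = [4, 5, 3, 4, 4, 5, 2, 4, 3, 4]   # hypothetical 5-point ACR scores
    mos, ci = mos_with_ci(ratings)
    print(f"MOS = {mos:.2f} +/- {ci:.2f}")
```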
{"title":"Measuring points for video subjective assessment – Impact of memory and stimulus variability","authors":"Tomasz Konaszyński ,&nbsp;Avrajyoti Dutta ,&nbsp;Burak Gizlice ,&nbsp;Dawid Juszka ,&nbsp;Mikołaj Leszczuk","doi":"10.1016/j.displa.2025.103283","DOIUrl":"10.1016/j.displa.2025.103283","url":null,"abstract":"<div><div>The work describes a QoE experiment concerning the assessment of the impact of memory and stimulus variability on subjective assessments of 2D videos, as well as an attempt to identify dominant points − moments in time or events influencing the overall assessment of the changing quality of the assessed films. Based on the results of the conducted QoE experiment, the impact of varying video quality on subjective assessment of 2D videos was clearly demonstrated, both in terms of results eligibility and subjective ratings.</div><div>The concept of “measurement points” was introduced, i.e., points in time or events that were associated with the highest impact on the values of subjective ratings when variable quality videos are assessed or videos are displayed in variable controlled environment.</div><div>The relationship between the memory of particular aspects of the video presentation, including the memory of subsequent appearances of the given video, and the values obtained from the assessment results were also demonstrated. There were observed regularities, including very strong negative effect of the variability of the technical quality of the rated videos on results eligibility, effect of boredom/annoyance from watching a longer video of variable quality, “last impression effect”, i.e. videos whose changing quality increases over time achieve higher MOS values than videos whose quality decreases over time and better assessments of “fresh” observations in comparison to the following ones.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103283"},"PeriodicalIF":3.4,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145580482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
MASGC: Hybrid attention and synchronous graph learning for monocular 3D pose estimation
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 (Computer Science, Hardware & Architecture) · Pub Date: 2025-11-12 · DOI: 10.1016/j.displa.2025.103284
Shengjie Li, Jin Wang, Jianwei Niu, Yuanhang Wang, Haiyun Zhang, Guodong Lu, Jingru Yang, Xiaolong Yu, Renluan Hou
Occlusion and depth ambiguity pose significant challenges to the accuracy of monocular 3D human pose estimation. To tackle these issues, this study presents a two-stage pose estimation method based on Multi-Attention and Synchronous-Graph-Convolution (MASGC). In the first stage (2D pose estimation), a feature pyramid convolutional attention (FPCA) module is designed based on a multiresolution feature pyramid (MFP) and a convolutional attention triplet (CAT). The module integrates channel, coordinate, and spatial attention, enabling the model to focus on the most salient features and mitigate the location-information loss caused by global pooling, thereby improving estimation accuracy. In the second stage (lifting to 3D), a temporal synchronous graph convolutional network (TSGCN) is designed. By incorporating multi-head attention and expanding the receptive field of end keypoints through topological temporal convolutions, TSGCN effectively addresses the challenges of occlusion and depth ambiguity in monocular 3D human pose estimation. Experimental results show that MASGC outperforms the compared baseline methods on benchmark datasets, including Human3.6M and a custom dual-arm dataset, while reducing computational complexity compared to mainstream models. The code is available at https://github.com/JasonLi-30/MASGC.
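As a concrete reference point for GCN-based lifting, the sketch below shows a generic graph-convolution layer over skeleton joints with a normalized adjacency matrix. It is not the paper’s TSGCN; the joint count and adjacency are placeholders.

```python
# Generic graph-convolution layer over skeleton joints (illustrative sketch of
# the building block behind GCN-style 2D-to-3D lifting, not the paper's TSGCN).
import torch
import torch.nn as nn

class SkeletonGraphConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, adjacency: torch.Tensor):
        super().__init__()
        # symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2
        a = adjacency + torch.eye(adjacency.size(0))
        d = a.sum(dim=1).pow(-0.5)
        self.register_buffer("a_norm", d[:, None] * a * d[None, :])
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, J, in_dim) joint features; aggregate neighbors, then transform
        return torch.relu(self.linear(torch.einsum("jk,bkc->bjc", self.a_norm, x)))

if __name__ == "__main__":
    J = 17                                       # e.g. a 17-joint skeleton
    adj = (torch.rand(J, J) > 0.8).float()       # placeholder adjacency
    adj = ((adj + adj.t()) > 0).float()
    layer = SkeletonGraphConv(2, 64, adj)
    print(layer(torch.randn(4, J, 2)).shape)     # torch.Size([4, 17, 64])
```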
{"title":"MASGC: Hybrid attention and synchronous graph learning for monocular 3D pose estimation","authors":"Shengjie Li ,&nbsp;Jin Wang ,&nbsp;Jianwei Niu ,&nbsp;Yuanhang Wang ,&nbsp;Haiyun Zhang ,&nbsp;Guodong Lu ,&nbsp;Jingru Yang ,&nbsp;Xiaolong Yu ,&nbsp;Renluan Hou","doi":"10.1016/j.displa.2025.103284","DOIUrl":"10.1016/j.displa.2025.103284","url":null,"abstract":"<div><div>Occlusion and depth ambiguity pose significant challenges to the accuracy of monocular 3D human pose estimation. To tackle these issues, this study presents a two-stage pose estimation method based on Multi-Attention and Synchronous-Graph-Convolution (MASGC). In the first stage (2D pose estimation), a feature pyramid convolutional attention (FPCA) module is designed based on a multiresolution feature pyramid (MFP) and a convolutional attention triplet (CAT), which integrates channel, coordinate, and spatial attention, enabling the model to focus on the most salient features and mitigate location information loss caused by global pooling, thereby improving estimation accuracy. In the second stage (lifting to 3D), a temporal synchronous graph convolutional network (TSGCN) is designed. By incorporating multi-head attention and expanding the receptive field of end keypoints through topological temporal convolutions, TSGCN effectively addresses the challenges of occlusion and depth ambiguity in monocular 3D human pose estimation. Experimental results show that MASGC outperforms the compared baseline methods on benchmark datasets, including Human3.6 M and a custom dual-arm dataset, while reducing computational complexity compared to mainstream models. The code is available at <span><span>https://github.com/JasonLi-30/MASGC</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103284"},"PeriodicalIF":3.4,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145580487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Vision language model based panel digit recognition for medical screen data acquisition
IF 3.4 · CAS Tier 2 (Engineering & Technology) · Q1 (Computer Science, Hardware & Architecture) · Pub Date: 2025-11-11 · DOI: 10.1016/j.displa.2025.103282
Yizhi Zou, Shuangjie Yuan, Haoyu Liu, Xu Cheng, Tao Zhu, Lu Yang
In the cardiac operating room, physicians interpret patients’ vital signs from medical equipment to make critical decisions, such as administering blood transfusions. However, the absence of automated data acquisition in these devices significantly complicates the documentation of surgical information. Existing text recognition methods are often limited to single applications and lack broad generalization capabilities, with inconsistent detection and recognition times. We present a novel medical device screen recognition framework based on pretrained Vision Language Models (VLMs). The vision-language-model-based structure significantly enhances flexibility across application scenarios, and multi-round dialogue makes the interaction more natural, allowing a better understanding of the surgeon’s needs. Because images acquired by a head-mounted camera can be unclear, we propose Medical Screen Data Acquisition-VLM (MSDA-VLM), which includes a pre-filtering module that detects image blur before screen data are acquired. This module detects heavy ghost images via Local Binary Pattern (LBP) based texture block matching and assesses image sharpness through the variance of deep feature maps. Trained with thousands of screen images on top of the pretrained VLM, our approach achieves a 17.07% improvement in precision and a 17.05% improvement in recall. Furthermore, the experimental results demonstrate notable enhancements in medical screen data recognition.
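The pre-filtering step can be pictured with a classical sharpness check, sketched below using the variance of the Laplacian. The paper instead scores sharpness from deep feature maps and detects ghosting with LBP block matching; neither is reproduced here, and the threshold and file path are assumptions.

```python
# Classical blur pre-filter: variance of the Laplacian response as a sharpness
# score (a stand-in for the paper's deep-feature-based measure).
import cv2
import numpy as np

SHARPNESS_THRESHOLD = 100.0   # hypothetical cutoff; tune on real screen images

def is_sharp_enough(image_bgr: np.ndarray) -> bool:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    lap_var = cv2.Laplacian(gray, cv2.CV_64F).var()
    return lap_var >= SHARPNESS_THRESHOLD

if __name__ == "__main__":
    img = cv2.imread("monitor_frame.png")   # hypothetical head-camera frame
    if img is not None:
        print("pass to VLM" if is_sharp_enough(img) else "discard: too blurry")
```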
{"title":"Vision language model based panel digit recognition for medical screen data acquisition","authors":"Yizhi Zou ,&nbsp;Shuangjie Yuan ,&nbsp;Haoyu Liu ,&nbsp;Xu Cheng ,&nbsp;Tao Zhu ,&nbsp;Lu Yang","doi":"10.1016/j.displa.2025.103282","DOIUrl":"10.1016/j.displa.2025.103282","url":null,"abstract":"<div><div>In the cardiac operating room, physicians interpret patients’ vital signs from medical equipment to make critical decisions, such as administering blood transfusions. However, the absence of automated data acquisition in these devices significantly complicates the documentation of surgical information. Existing text recognition methods are often limited to single applications and lack broad generalization capabilities, with inconsistent detection and recognition times. We present a novel medical device screen recognition framework based on pretrained Vision Language Models (VLMs). The structure based on the vision language model significantly enhances the flexibility of the application scenario, and the multi-round dialogue is more humanized, allowing for a better understanding of the surgeon’s needs. Considering the existence of unclear image information acquired by a head-mounted camera, for the acquisition of screen data, we propose Medical Screen Data Acquisition-VLM (MSDA-VLM) with a pre-filtering module to detect image blur. This module detects heavy ghost images via Local Binary Pattern (LBP) based texture block matching and assesses image sharpness through the variance of deep feature maps. Trained by thousands of screen images on the pretrained VLM, we achieve a 17.07% improvement in precision and a 17.05% improvement in recall. Furthermore, the experiment results demonstrate notable enhancements in medical screen data recognition.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103282"},"PeriodicalIF":3.4,"publicationDate":"2025-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145529257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0