
Latest publications in Displays

Thermal memory of chair: Physics-guided generalization for sitting posture inference using thermal imaging
IF 3.4 | CAS Tier 2 (Engineering & Technology) | JCR Q1 (Computer Science, Hardware & Architecture) | Pub Date: 2026-04-01 (Epub 2025-12-30) | DOI: 10.1016/j.displa.2025.103331 | Displays 92, Article 103331
Jin Ai , Gan Pei , Bitao Ma , Menghan Hu , Jian Zhang
Thermal imaging offers a viable approach for contactless posture monitoring due to its privacy-preserving nature and ability to capture residual thermal patterns. Existing methods exhibit limited generalization capabilities across different materials and thermal decay stages, coupled with a lack of reliable physical interpretability. To address these challenges, this study proposes an integrated paradigm combining generative data augmentation, visual transformer classification, and finite element (FE) simulation. The proposed pipeline first enhances data diversity through a generative model, then employs a Transformer-based classifier to achieve accurate recognition of 9 sitting postures. Finally, a heat conduction model is constructed to simulate the real thermal decay temperature field, decoding the influence of material and time on buttock thermal patterns. Through this paradigm, we identify a critical temperature difference threshold of 2.6 ± 0.06 K, beyond which model performance significantly degrades. Systematic analysis demonstrates that maintaining surface temperatures above this threshold during the initial 30 s enables the model to sustain accuracy above 85%. Furthermore, we quantified the direct impact of material thermophysical parameters on the effective detection window, revealing that materials with lower thermal conductivity (e.g., plastics) extend reliable identification duration. Validation on an independent test set featuring two materials and varying decay durations demonstrated a classification accuracy of 0.9162. This study establishes a thermal imaging-based posture analysis paradigm, providing a theoretical foundation and practical solutions for real-world applications in privacy-sensitive scenarios by decoding buttock thermal patterns. The dataset and code supporting this study are publicly available at: https://github.com/AJ-1995/Thermal-Memory-of-Chair.
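The material effect reported above (lower thermal conductivity extends the detection window) can be illustrated with a toy 1D heat-conduction sketch, a drastic simplification of the paper's finite-element model. Every parameter below (imprint depth and magnitude, convection coefficient, material properties) is an illustrative assumption, not a value from the paper; only the 2.6 K threshold comes from the abstract.

```python
import numpy as np

def decay_time(k, rho, cp, dT0=8.0, threshold=2.6,
               depth=0.02, nx=50, t_max=600.0, h=10.0):
    """Seconds until the surface excess temperature of a warm imprint
    falls below `threshold`, via an explicit 1D finite-difference scheme.

    k   : thermal conductivity, W/(m K)
    rho : density, kg/m^3
    cp  : specific heat, J/(kg K)
    h   : convection coefficient to ambient, W/(m^2 K) (assumed)
    """
    alpha = k / (rho * cp)          # thermal diffusivity, m^2/s
    dx = depth / nx
    dt = 0.4 * dx ** 2 / alpha      # stable FTCS time step
    T = np.zeros(nx)                # excess temperature over ambient
    T[:5] = dT0                     # warm imprint in the top ~2 mm
    t = 0.0
    while t < t_max:
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        # Surface node: conduction into the slab plus convective loss.
        Tn[0] = (T[0] + alpha * dt / dx**2 * 2 * (T[1] - T[0])
                 - h * dt / (rho * cp * dx) * T[0])
        Tn[-1] = T[-1] + alpha * dt / dx**2 * 2 * (T[-2] - T[-1])  # insulated back
        T, t = Tn, t + dt
        if T[0] < threshold:
            break
    return t

# Low-conductivity plastic vs. a metal shell (illustrative properties):
t_plastic = decay_time(k=0.2, rho=1200.0, cp=1500.0)
t_metal = decay_time(k=50.0, rho=7800.0, cp=500.0)
# The plastic imprint stays above the threshold far longer than the metal one.
```

The decay time scales with dx²/α, so the ratio of windows is roughly the inverse ratio of thermal diffusivities, which is the qualitative trend the abstract reports.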
Citations: 0
AFFLIE: Adaptive feature fusion for low-light image enhancement
IF 3.4 | CAS Tier 2 (Engineering & Technology) | JCR Q1 (Computer Science, Hardware & Architecture) | Pub Date: 2026-04-01 (Epub 2026-01-03) | DOI: 10.1016/j.displa.2026.103340 | Displays 92, Article 103340
Yaxin Lin , Xiaopeng Li , Lian Zou , Liqing Zhou , Cien Fan
Under low illumination, RGB cameras often capture images with significant noise and low visibility. Event cameras, with their high dynamic range, are a promising complement, supplementing image detail to improve image quality in low-light environments. In this paper, we propose a novel image enhancement framework called AFFLIE, which integrates event- and frame-based techniques to improve image quality in low-light conditions. The framework introduces a Multi-scale Spatial-Channel Transformer Encoder (MS-SCTE) to address low-light image noise and the temporal characteristics of events. Additionally, an Adaptive Feature Fusion Module (AFFM) is proposed to dynamically aggregate features from both image and event streams, enhancing generalization performance. The framework demonstrates superior performance on the SDE, LIE and RELED datasets by enhancing noise reduction and detail preservation.
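Dynamic aggregation of two feature streams can be illustrated with a generic gated-fusion cell. This is a schematic analogue of the idea behind AFFM (a learned gate weighting image against event features), not the paper's architecture; `w` stands in for learned parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fuse(f_img, f_evt, w):
    """Per-channel gated fusion of image and event features.

    f_img, f_evt : (C,) feature vectors from the two streams
    w            : (2C, C) gate weights (learned in a real model)

    The gate g in (0, 1) decides how much each channel trusts the
    image stream; the remainder comes from the event stream.
    """
    g = sigmoid(np.concatenate([f_img, f_evt]) @ w)
    return g * f_img + (1.0 - g) * f_evt

# With zero gate weights the gate is 0.5 everywhere, i.e. a plain average.
C = 4
fused = gated_fuse(np.ones(C), np.zeros(C), np.zeros((2 * C, C)))
```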
Citations: 0
Finite-element simulation of hierarchical wrinkled surfaces for tunable pretilt control in nematic liquid crystals
IF 3.4 | CAS Tier 2 (Engineering & Technology) | JCR Q1 (Computer Science, Hardware & Architecture) | Pub Date: 2026-04-01 (Epub 2025-12-01) | DOI: 10.1016/j.displa.2025.103312 | Displays 92, Article 103312
Jae-Hyun Park , Yu-Ahn Lee , Hae-Chang Jeong , Hong-Gyu Park
This study proposes a next-generation non-contact alignment technique to replace the conventional rubbing process in liquid crystal displays (LCDs). In this theoretical study, a hierarchical wrinkle structure, consisting of superimposed primary and secondary wrinkles on the substrate surface, was introduced, and its effectiveness was systematically analyzed through finite-element method (FEM) simulations. The geometrical parameters of the hierarchical surface were varied to investigate their influence on the liquid crystal (LC) orientation and the electro-optical performance of the device. The results revealed that the macroscopic alignment order parameter is primarily determined by the aspect ratio of the primary wrinkle, whereas the pretilt angle—a key determinant of display performance—can be independently tuned by the aspect ratio of the secondary wrinkle. Based on this decoupled control of alignment characteristics, the optimized structure exhibiting a pretilt of 2.3° achieved a reduction in threshold voltage by approximately 23 % and an enhancement in response speed by about 40 %, compared with a single-wrinkle structure lacking pretilt. These findings establish a scaling relationship between wrinkle geometry and pretilt formation, providing a theoretical foundation and design guideline for high-performance LC devices.
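The macroscopic alignment order parameter discussed above is, for a uniaxial nematic, the standard S = (3⟨cos²θ⟩ − 1)/2. A minimal sketch of computing it from director angles (not the authors' FEM post-processing):

```python
import numpy as np

def order_parameter(theta):
    """Uniaxial nematic order parameter S = (3<cos^2 theta> - 1)/2,
    where theta is the angle between each local LC director and the
    mean alignment axis: S = 1 for perfect alignment, S = 0 for an
    isotropic distribution."""
    theta = np.asarray(theta, dtype=float)
    return 0.5 * (3.0 * np.mean(np.cos(theta) ** 2) - 1.0)

# Perfectly aligned directors give S = 1; an isotropic 3D distribution
# (cos(theta) uniform on [-1, 1]) gives S close to 0.
rng = np.random.default_rng(0)
s_aligned = order_parameter(np.zeros(1000))
s_isotropic = order_parameter(np.arccos(rng.uniform(-1.0, 1.0, 200_000)))
```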
Citations: 0
Auto-BUSAM: Auto-segmentation of attention-diverted low-contrast breast ultrasound images
IF 3.4 | CAS Tier 2 (Engineering & Technology) | JCR Q1 (Computer Science, Hardware & Architecture) | Pub Date: 2026-04-01 (Epub 2025-12-03) | DOI: 10.1016/j.displa.2025.103314 | Displays 92, Article 103314
Xiankui Liu , Musarat Hussain , Ji Huang , Qi Li , Muhammad Tahir Khan , Hongyan Wu
Segmentation of breast ultrasound images is crucial but challenging due to the limited labeled data and low image contrast, which can mislead transformer attention mechanisms and reduce segmentation accuracy. The recent emergence of large models like the Segment Anything Model (SAM) offers new opportunities for segmentation tasks. However, SAM’s reliance on expert-provided prompts and its limitations in handling the low-contrast ultrasound images reduce its effectiveness in medical imaging applications. To address these limitations, we propose Auto-BUSAM, a YOLO-guided adaptation of SAM designed for precise and automated segmentation of low-contrast breast ultrasound images. Our framework introduces two lightweight yet effective innovations: (i) an Automatic Prompt Generator, based on YOLOv8, that automatically detects and generates bounding box prompts that guide SAM’s focus to relevant regions within ultrasound images, minimizing reliance on expert knowledge and reducing manual effort; and (ii) a Low-Rank Approximation attention module that improves feature discrimination and noise filtering by refining SAM’s attention mechanisms in the Mask Decoder. Importantly, our method preserves SAM’s pre-trained generalization ability by freezing the original encoder while fine-tuning only the Mask Decoder with the lightweight modules. Experimental results demonstrate a significant improvement in segmentation accuracy on the BUSI and Dataset B datasets, compared to SAM’s default mode. Our model also significantly outperforms both classical deep learning baselines and other SAM-based frameworks. The code is available at https://github.com/aI-area/Auto-BUSAM.
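The Automatic Prompt Generator's hand-off from detector to segmenter reduces to a box-format conversion: YOLO-style detections are typically normalized (cx, cy, w, h), while SAM-style box prompts are absolute (x1, y1, x2, y2) pixel coordinates. A minimal sketch of that step (the function name and box conventions are illustrative, not the authors' code):

```python
import numpy as np

def yolo_box_to_sam_prompt(box_norm, img_w, img_h):
    """Convert a YOLO-style normalized (cx, cy, w, h) box into the
    absolute (x1, y1, x2, y2) array that SAM-style predictors accept
    as a box prompt."""
    cx, cy, w, h = box_norm
    return np.array([(cx - w / 2) * img_w,   # x1: left edge, pixels
                     (cy - h / 2) * img_h,   # y1: top edge
                     (cx + w / 2) * img_w,   # x2: right edge
                     (cy + h / 2) * img_h])  # y2: bottom edge

# A centered detection covering half of a 640x480 ultrasound frame:
prompt = yolo_box_to_sam_prompt((0.5, 0.5, 0.5, 0.5), 640, 480)
```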
Citations: 0
Design and optimization of metastructures for enhanced light extraction in Top-Emitting OLEDs
IF 3.4 | CAS Tier 2 (Engineering & Technology) | JCR Q1 (Computer Science, Hardware & Architecture) | Pub Date: 2026-04-01 (Epub 2025-11-07) | DOI: 10.1016/j.displa.2025.103280 | Displays 92, Article 103280
Sungbeom Kim, Jisoo Kyoung
Organic light-emitting diodes (OLEDs) have already become the mainstream display technology in smartphones and televisions, but they are now attracting increasing attention for emerging applications such as augmented reality (AR) and mixed reality (MR) devices. These applications demand extremely high brightness for clear visibility under sunlight, yet the efficiency of top-emitting OLEDs is fundamentally constrained by low light extraction caused by strong optical confinement in their microcavity structures. In this work, we systematically investigated light outcoupling in microcavity-based top-emitting OLEDs using finite-difference time-domain simulations. Several internal metastructures were investigated, and the hemispherical design, when optimized through a particle swarm algorithm, exhibited the greatest improvement, enhancing light extraction by approximately 103.2%. These findings demonstrate that careful design and optimization of metastructures can effectively mitigate optical losses and substantially increase brightness, offering a promising pathway for high-performance OLED displays tailored to AR and MR applications, as well as emerging biomedical devices such as wearable phototherapeutic OLEDs.
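Particle swarm optimization of the kind used to tune the hemispherical design can be sketched in a few lines. Here the expensive FDTD objective is replaced by a toy quadratic, and the coefficients are generic textbook choices, not the paper's settings.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=100, seed=0):
    """Minimal global-best particle swarm optimizer.

    f      : objective mapping an (n_dim,) array to a scalar to minimize
    bounds : (n_dim, 2) array-like of [low, high] per dimension
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    n_dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, n_dim))   # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()          # global best
    w_in, c1, c2 = 0.7, 1.5, 1.5                    # textbook coefficients
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n_dim))
        v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                  # keep particles in the box
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Toy stand-in for the FDTD outcoupling objective: a quadratic with its
# optimum at (1, 2) inside the box [-5, 5]^2.
best_x, best_val = pso_minimize(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2,
                                bounds=[[-5, 5], [-5, 5]])
```

In the paper's setting, `f` would evaluate a full FDTD simulation of a candidate metastructure geometry, which is why keeping the swarm and iteration counts small matters.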
Citations: 0
The influence of graphical effects of touch buttons on the visual usability and driving safety of in-vehicle information systems
IF 3.4 | CAS Tier 2 (Engineering & Technology) | JCR Q1 (Computer Science, Hardware & Architecture) | Pub Date: 2026-04-01 (Epub 2025-11-21) | DOI: 10.1016/j.displa.2025.103294 | Displays 92, Article 103294
Yuanyang Zuo , Jun Ma , Lijuan Zhou , Zhipeng Hu , Yi Song , Yupeng Wang
Touch screens have become the main interface through which drivers complete secondary tasks in in-vehicle information systems (IVIS), and tapping touch buttons is the most common interaction behavior in IVIS. However, recognizing and operating touch buttons increases the driver's workload and causes distraction, which affects driving safety. This study aims to reduce driving distraction and improve driving safety and the driving experience by designing touch buttons that improve visual search efficiency and interaction performance. First, we designed 15 touch-button schemes based on a previous theoretical summary and effect screening. Then, using simulated driving, eye-tracking measurement, and user questionnaires, we obtained data for four evaluation indicators: task, physiological, driving performance, and subjective questionnaire. Finally, the entropy weight method was adopted to evaluate the designs comprehensively. The results indicate that touch buttons with dynamic color-change effects, color projection, circular shape, negative polarity, and boundaries exhibit better visual usability in secondary tasks. The scheme proposed in this paper offers suggestions on the visual usability of touch-button design for automotive intelligent cabins, which is conducive to improving driving safety, task efficiency, and user experience.
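The entropy weight method used for the comprehensive evaluation is a standard objective-weighting scheme: indicators whose values vary more across the candidate schemes carry more information and receive larger weights. A minimal sketch (the toy numbers are illustrative, not the study's indicator data):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for an (n_samples, n_indicators) matrix of
    positive, benefit-type indicator scores (larger is better).

    Returns one objective weight per indicator that sums to 1.
    """
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    P = X / X.sum(axis=0)                       # each sample's share per indicator
    e = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(n)  # entropy in [0, 1]
    d = 1.0 - e                                 # divergence = information content
    return d / d.sum()

# Indicator 0 barely discriminates between the three schemes; indicator 1
# does, so it should receive the larger weight.
W = entropy_weights([[0.50, 0.1],
                     [0.51, 0.9],
                     [0.50, 0.5]])
```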
Citations: 0
Leveraging the power of eye-tracking for virtual prototype evaluation: a comparison between virtual reality and photorealistic images
IF 3.4 | CAS Tier 2 (Engineering & Technology) | JCR Q1 (Computer Science, Hardware & Architecture) | Pub Date: 2026-04-01 (Epub 2026-01-10) | DOI: 10.1016/j.displa.2026.103343 | Displays 92, Article 103343
Almudena Palacios-Ibáñez , Manuel F. Contero-López , Santiago Castellet-Lathan , Nathan Hartman , Manuel Contero
Most of the information we gather from our environment comes from sight; hence, visual evaluation is vital for assessing products. However, designers have traditionally relied on self-report questionnaires for this purpose, which have proven insufficient in some cases. Consequently, physiological measures are being employed to gain a deeper understanding of the cognitive and perceptual processes involved in product evaluation, and, thanks to their integration into Virtual Reality (VR) headsets, they have become a powerful tool for virtual prototype assessment. Still, using virtual prototypes raises some concerns, as previous studies have found that the medium can influence product perception. Those results rely solely on self-report techniques, highlighting the need to explore the use of eye-tracking (ET) for product assessment, which is the main objective of this research. We present two case studies in which a group of people assessed, through two display mediums, (CS-1) a set of furniture comprising a general scene using a ranking-type evaluation (i.e., joint assessment) and (CS-2) two armchairs individually using the Semantic Differential technique. Moreover, the dwell time on the defined Areas of Interest (AOIs) was recorded. Primarily, our results showed that, although VR is sensitive to aesthetic differences between designs of the same product typology, the medium may still influence the perception of specific product attributes, e.g., fragility (pMODERN < 0.001, pTRADITIONAL = 0.002), and the observation of specific AOIs, e.g., AOI1 (pMODERN = 0.003, pTRADITIONAL < 0.001), AOI9 and AOI10 (p < 0.001). At the same time, no differences were found in the perception of the general scene, whereas dwell time was influenced for AOI1 (p = 0.003), AOI4 (p = 0.006), and AOI5 (p < 0.001). Additionally, the university of origin may also be a factor influencing product evaluation, while confidence in the response was not affected by the medium. Hence, this study contributes to a deeper understanding of how the medium influences product perception by employing ET with self-report methods, offering valuable insights into user behavior.
Citations: 0
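The AOI dwell-time measure discussed in the abstract above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the gaze-sample format and the rectangular AOI bounds are assumptions.

```python
# Hypothetical sketch: total dwell time per Area of Interest (AOI)
# from timestamped gaze samples. Sample and AOI formats are illustrative.

def dwell_times(samples, aois):
    """samples: list of (t_seconds, x, y); aois: {name: (x0, y0, x1, y1)}.
    Accumulates each inter-sample interval into the AOI the gaze fell in."""
    totals = {name: 0.0 for name in aois}
    for (t0, x, y), (t1, _, _) in zip(samples, samples[1:]):
        dt = t1 - t0
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dt
                break
    return totals

samples = [(0.0, 10, 10), (0.5, 12, 11), (1.0, 80, 80), (1.5, 82, 81)]
aois = {"AOI1": (0, 0, 50, 50), "AOI2": (60, 60, 100, 100)}
print(dwell_times(samples, aois))  # AOI1 accumulates 1.0 s, AOI2 0.5 s
```

In practice dwell times are then compared across display mediums (VR vs. photorealistic images), as the study does for AOI1, AOI4, and AOI5.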
Blind tone-mapped omnidirectional image quality assessment via joint distortion perception and visual perception learning
IF 3.4 CAS Tier 2 Engineering & Technology Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2026-04-01 Epub Date: 2025-12-13 DOI: 10.1016/j.displa.2025.103324
Chongchong Jin , Zikang Chen , Guobing Zhou , Zhouyan He , Yang Song , Yeyao Chen , Ting Luo
With the continuous advancement of display technologies, users can view High Dynamic Range Omnidirectional Images (HOIs) with a wide field of view and high contrast. However, when displayed on standard dynamic range devices, these HOIs usually undergo tone mapping, which may introduce complex visual distortions. Existing image quality assessment methods rarely consider the joint distortions caused by tone mapping and omnidirectional characteristics, making it difficult to effectively evaluate the perceptual quality of tone-mapped HOIs. Establishing a deep learning-based Tone-Mapped Omnidirectional Image Quality Assessment (TMOIQA) method is therefore crucial for monitoring content quality and promoting the industrial application of HOI systems. To this end, we propose a blind TMOIQA method based on joint distortion perception and visual perception learning. Specifically, to handle the omnidirectional characteristics of HOIs, an improved Equi-Angular Cube (EAC) projection transforms HOIs into viewports for input. These viewports are then processed by a dual-branch network: a distortion difference perception branch, employing a Difference Image Estimation Network (DIEN) and a Distortion Difference Perception Network (DDPN), captures distortion-related features (objective quantification), while a visual quality perception branch, through the Visual Quality Perception Network (VQPN), extracts visual-related features (subjective experience). Additionally, a Viewport Relationship Modeling Network (VRMN) integrates spatial dependencies among viewports to provide a more accurate overall quality prediction. Extensive experiments on the benchmark database NBU-HOID demonstrate that the proposed TMOIQA method outperforms state-of-the-art methods.
{"title":"Blind tone-mapped omnidirectional image quality assessment via joint distortion perception and visual perception learning","authors":"Chongchong Jin ,&nbsp;Zikang Chen ,&nbsp;Guobing Zhou ,&nbsp;Zhouyan He ,&nbsp;Yang Song ,&nbsp;Yeyao Chen ,&nbsp;Ting Luo","doi":"10.1016/j.displa.2025.103324","DOIUrl":"10.1016/j.displa.2025.103324","url":null,"abstract":"<div><div>With the continuous advancement of display technologies, users can view High Dynamic Range Omnidirectional Images (HOIs) with a wide field of view and high contrast. However, when displayed on standard dynamic range devices, these HOIs usually undergo tone mapping, which may introduce complex visual distortions. Existing image quality assessment methods rarely consider the joint distortions caused by tone mapping and omnidirectional characteristics, making it difficult to effectively evaluate the perceptual quality of tone-mapped HOIs. Establishing a deep learning-based Tone-Mapped Omnidirectional Image Quality Assessment (TMOIQA) method is therefore crucial for monitoring content quality and promoting the industrial application of HOI systems. To this end, we propose a blind TMOIQA method based on joint distortion perception and visual perception learning. Specifically, in response to the omnidirectional characteristics of HOIs, an improved Equi-Angular Cube (EAC) projection is used to transform HOIs into viewports for input. These viewports are then processed by a dual-branch network: A distortion difference perception branch, employing a Difference Image Estimation Network (DIEN) and Distortion Difference Perception Network (DDPN), captures distortion-related features (objective quantification); A visual quality perception branch, through the Visual Quality Perception Network (VQPN), extracts visual-related features (subjective experience). 
Additionally, a Viewport Relationship Modeling Network (VRMN) integrates spatial dependencies among viewports to provide a more accurate overall quality prediction. Extensive experiments on the benchmark database NBU-HOID demonstrate that the proposed TMOIQA method outperforms state-of-the-art methods.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103324"},"PeriodicalIF":3.4,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145796741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
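Purely as an illustration of the final pooling step that viewport-based quality methods need (this is not the paper's VRMN, which learns viewport relationships; the cosine-latitude weighting is an assumption mimicking equator-biased viewing), per-viewport scores might be combined like so:

```python
# Illustrative sketch, not the paper's model: pool per-viewport quality
# predictions into one omnidirectional score, weighting each viewport by
# cos(latitude) so equatorial viewports dominate.
import math

def pool_viewport_scores(scores_with_lat):
    """scores_with_lat: list of (predicted_score, latitude_deg)."""
    num = den = 0.0
    for score, lat in scores_with_lat:
        w = math.cos(math.radians(lat))
        num += w * score
        den += w
    return num / den

# Equatorial viewport counts twice as much as the +/-60 deg viewports.
print(pool_viewport_scores([(4.0, 0), (3.0, 60), (2.0, -60)]))
```

A learned aggregator such as the VRMN replaces this fixed weighting with modeled spatial dependencies among viewports.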
High-stability low-power pixel circuit with LTPO inverter for high-refresh mini LED displays
IF 3.4 CAS Tier 2 Engineering & Technology Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2026-04-01 Epub Date: 2025-12-14 DOI: 10.1016/j.displa.2025.103325
Baozhen Ma , Wei Cai , Ruhai Guo , Haoming He , Yuanjun Guo , Zhenhuai Yang , Yilu Yang , Lei Zeng , Qiang Hu , Honglong Ning
The advancement of automotive displays, gaming monitors, direct-view LED TVs, and 3D displays demands Mini LED technology with higher resolution and refresh rates. This paper proposes a pulse-width modulation (PWM) driven low-temperature polycrystalline oxide (LTPO) pixel circuit and conducts functional simulations comparing the proposed circuit with existing LTPO TFT-based and metal-oxide (MO) TFT-based designs. Results demonstrate current stability exceeding 99 % during the illumination phase, with current rise and fall times of less than 0.1 μs and 0.3 μs, respectively, making the circuit suitable for both high-refresh-rate (fast conversion) and low-refresh-rate (high stability) applications. A 320 × RGB × 240 Mini LED display with 0.45 mm pitch was fabricated for validation. Spectral analysis of blue LEDs at 255 gray levels revealed a peak wavelength shift of less than 1 nm, indicating negligible color deviation. Results show that the proposed pixel circuit achieves frame-by-frame driving with high output current stability, low power consumption, and high refresh capability for Mini LED displays, effectively suppressing flicker artifacts while maintaining precise synchronization.
{"title":"High-stability low-power pixel circuit with LTPO inverter for high-refresh mini LED displays","authors":"Baozhen Ma ,&nbsp;Wei Cai ,&nbsp;Ruhai Guo ,&nbsp;Haoming He ,&nbsp;Yuanjun Guo ,&nbsp;Zhenhuai Yang ,&nbsp;Yilu Yang ,&nbsp;Lei Zeng ,&nbsp;Qiang Hu ,&nbsp;Honglong Ning","doi":"10.1016/j.displa.2025.103325","DOIUrl":"10.1016/j.displa.2025.103325","url":null,"abstract":"<div><div>The advancement of automotive displays, gaming monitors, direct-view LED TVs, and 3D displays demands Mini LED technology with higher resolution and refresh rates. This paper proposes a pulse-width modulation (PWM) driven low-temperature polycrystalline oxide (LTPO) pixel circuit, and conducted functional simulations, comparing the proposed circuit with existing LTPO TFT-based and metal-oxide(MO) TFT-based designs. Results demonstrated a current stability exceeding 99 % during illumination phase, with current rise and fall times of less than 0.1 μs and less than 0.3 μs, respectively, enabling for both high-refresh-rate (fast conversion) and low-refresh-rate (high stability) applications. A 320 × RGB × 240 Mini LED display with 0.45 mm pitch was fabricated for validation.<!--> <!-->Spectral analysis of blue LEDs at 255 grayscales revealed a peak wavelength shift less than 1 nm, indicating negligible color deviation. 
Results show that the proposed pixel circuit achieves frame-by-frame driving with high output current stability, low power consumption, and high refresh capabilities for Mini LED displays, effectively suppressing flicker artifacts while maintaining precise synchronization.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103325"},"PeriodicalIF":3.4,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145796855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
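The grayscale-to-on-time mapping implied by PWM driving can be sketched briefly. The 120 Hz frame period and 8-bit depth below are illustrative assumptions, not values taken from the paper:

```python
# Minimal sketch of PWM grayscale encoding for an LED pixel: gray level
# maps linearly to LED on-time within one frame. Frame period and bit
# depth are assumed values for illustration.

def pwm_on_time_us(gray, frame_period_us=8333.0, bit_depth=8):
    """Map a gray level to LED on-time (microseconds) in one 120 Hz frame."""
    levels = (1 << bit_depth) - 1  # 255 for 8-bit grayscale
    return frame_period_us * gray / levels

print(pwm_on_time_us(255))  # full brightness: on for the entire frame
print(pwm_on_time_us(128))  # roughly 50 % duty cycle
```

Because brightness is set by pulse width at a fixed drive current, the LED operating point (and hence peak wavelength) stays nearly constant across gray levels, consistent with the sub-1 nm shift reported in the abstract.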
Luminance evaluation and control method for glass curtain wall LED media facade displays based on human visual perception
IF 3.4 CAS Tier 2 Engineering & Technology Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2026-04-01 Epub Date: 2025-12-28 DOI: 10.1016/j.displa.2025.103335
Bo Wang , Yuan Chen , Ayin Yan , Kepan Xu , Wenhao Bao , Yu Luo , Wenqing Xie , Xinshuo Zhang , Ying He
The proliferation of glass curtain wall LED media facade displays (G-LMDs) is transforming urban night environments but also introduces significant visual discomfort to observers and contributes to light pollution through glare, sky glow, and light intrusion. These impacts arise from the outward-facing luminous sources mounted on building facades, which generate high luminance and strong luminance contrast at night. Existing building facade luminance standards, formulated for floodlighting, cannot evaluate or guide this new typology. This study proposes a novel “point-line-surface” luminance evaluation method that integrates the lighting characteristics of G-LMDs with human luminance perception properties. We quantified the perceptual impact of G-LMDs by conducting luminance tests on their point and line sources from an observer’s perspective and converting the results into equivalent surface-source luminance. A key finding is that comfortable surface luminance is only weakly influenced by array spacing and type, demonstrating high stability, which supports its use as a reliable metric for evaluating and controlling G-LMD luminance. Based on this stability and its variation with ambient luminance, this study proposes G-LMD luminance control values of 70 cd/m2, 65 cd/m2, and 50 cd/m2 for high, medium, and low ambient luminance, respectively. Furthermore, we invert the evaluation method into a practical “surface-line-point” design strategy that translates perceptual luminance targets into actionable lighting parameters, offering specific recommendations for different ambient luminance conditions. The proposed evaluation and control method and design strategy offer practical guidance for the design of urban G-LMDs and present a viable approach to mitigating urban light pollution and supporting landscape management.
{"title":"Luminance evaluation and control method for glass curtain wall LED media facade displays based on human visual perception","authors":"Bo Wang ,&nbsp;Yuan Chen ,&nbsp;Ayin Yan ,&nbsp;Kepan Xu ,&nbsp;Wenhao Bao ,&nbsp;Yu Luo ,&nbsp;Wenqing Xie ,&nbsp;Xinshuo Zhang ,&nbsp;Ying He","doi":"10.1016/j.displa.2025.103335","DOIUrl":"10.1016/j.displa.2025.103335","url":null,"abstract":"<div><div>The proliferation of glass curtain wall LED media facade displays (G-LMDs) is transforming urban night environments but also introducing significant visual discomfort to observers and contributing to light pollution through glare, sky glow, and light intrusion. These impacts arise from the outward-facing luminous light sources mounted on building facades, which generate high luminance and strong luminance contrast at night. Existing building facade luminance standards, which were formulated for floodlighting, cannot evaluate and guide this new typology. This study proposes a novel “point-line-surface” luminance evaluation method that integrates the lighting characteristics of G-LMDs with human luminance perception properties. We quantified the human perceptual impact of G-LMDs by conducting luminance tests on their point and line sources from an observer’s perspective and converting the results into equivalent surface-source luminance. A key finding is that comfortable surface luminance is less influenced by array spacing and type, demonstrating high stability, which supports its use as a reliable metric for evaluating and controlling G-LMDs luminance. Based on this stability and its variability with ambient luminance, this study proposes G-LMDs luminance control values of 70 cd/m<sup>2</sup>, 65 cd/m<sup>2</sup>, and 50 cd/m<sup>2</sup> for high, medium, and low ambient luminance, respectively. 
Furthermore, we invert the evaluation method into a practical “surface-line-point” design strategy to translate perceptual luminance targets into actionable lighting parameters, offering specific recommendations for different ambient luminance conditions. The proposed evaluation and control method and design strategy offer practical guidance for the design of urban G-LMDs and presents a viable strategy for mitigating urban light pollution and supporting landscape management.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103335"},"PeriodicalIF":3.4,"publicationDate":"2026-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145883335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
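The reported control values lend themselves to a simple lookup. This sketch encodes only the numbers stated in the abstract; the boundaries between ambient classes remain qualitative in the source, so the class labels here are the paper's own categories rather than measured thresholds:

```python
# Sketch of the abstract's recommended G-LMD surface luminance (cd/m^2)
# by ambient luminance class. Values are taken directly from the abstract;
# how a site is assigned to a class is not specified here.

def recommended_luminance(ambient_class):
    table = {"high": 70, "medium": 65, "low": 50}  # cd/m^2
    return table[ambient_class]

print(recommended_luminance("low"))  # → 50
```

A designer would then work backwards ("surface-line-point") from this target surface luminance to per-source settings for a given array spacing and material.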