
Latest publications in Displays

Enhanced white efficiency using film color filter via internal reflectance control by capping and refractive index matching layers for rigid OLED panels
IF 3.4, Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2026-01-09. DOI: 10.1016/j.displa.2026.103345
Horyun Chung , Eunjae Na , Myunghwan Kim , Sungguk An , Yeong Hwan Ko , Jae Su Yu
To enhance external luminous efficiency and reduce power consumption in rigid top-emitting organic light-emitting diode (OLED) panels for mobile applications, a film color filter was introduced as a promising alternative to conventional polarizers. The film color filter exhibited higher transmittance in the red, green, and blue emission wavelength regions of OLEDs compared to the polarizer, thereby improving external luminous efficiency. However, its application also increases reflectance due to external light, which necessitates optimization strategies to mitigate this drawback. To address this issue, the internal reflection within the OLED panel was reduced by optimizing the capping layer (CPL) thickness from 60 to 40 nm. Additionally, a refractive index matching layer was implemented between the encapsulation glass and the CPL, resulting in a 24.5% reduction in the specular component included (SCI) reflectance and a decrease in the absolute value of the specular component excluded (SCE) reflection color coordinate. White efficiency typically decreases with the reduction of the CPL thickness; however, Device B exhibited improvements of 13.7%, 16.8%, and 12.4% in white efficiency compared to the polarizer at CPL thicknesses of 40, 50, and 60 nm, respectively. This enhancement was particularly pronounced in the blue emission region, where the luminous efficiency is inherently lower. These findings indicate that optimizing the CPL thickness to 40 nm in conjunction with Device B effectively reduces SCI reflectance, improves the SCE reflection color coordinate, and enhances white efficiency. This study demonstrates that replacing the conventional polarizer with a film color filter is a viable approach to achieving higher luminous efficiency in rigid top-emitting OLED panels for mobile devices.
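As a rough illustration of why an index-matching layer reduces internal reflection, the sketch below compares incoherent normal-incidence Fresnel reflectance for a direct glass/CPL interface with two smaller index steps through an intermediate layer. The index values are assumptions chosen for illustration, not values reported in the paper, and thin-film interference is ignored.

```python
# Illustrative only: normal-incidence Fresnel reflectance at a single interface.
# The refractive indices below are hypothetical, not the paper's measured values.
def fresnel_reflectance(n1: float, n2: float) -> float:
    """Fraction of light reflected at an n1 -> n2 interface at normal incidence."""
    return ((n1 - n2) / (n1 + n2)) ** 2

n_glass, n_cpl, n_match = 1.5, 1.9, 1.7  # assumed: encapsulation glass, CPL, matching layer
direct = fresnel_reflectance(n_glass, n_cpl)
stepped = fresnel_reflectance(n_glass, n_match) + fresnel_reflectance(n_match, n_cpl)
print(f"direct glass/CPL: {direct:.2%}   with matching layer: {stepped:.2%}")
```

Splitting one large index step into two smaller ones lowers the summed reflectance, which is the qualitative effect the matching layer exploits.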
{"title":"Enhanced white efficiency using film color filter via internal reflectance control by capping and refractive index matching layers for rigid OLED panels","authors":"Horyun Chung ,&nbsp;Eunjae Na ,&nbsp;Myunghwan Kim ,&nbsp;Sungguk An ,&nbsp;Yeong Hwan Ko ,&nbsp;Jae Su Yu","doi":"10.1016/j.displa.2026.103345","DOIUrl":"10.1016/j.displa.2026.103345","url":null,"abstract":"<div><div>To enhance external luminous efficiency and reduce power consumption in rigid top-emitting organic light-emitting diode (OLED) panels for mobile applications, a film color filter was introduced as a promising alternative for conventional polarizers. The film color filter exhibited higher transmittance in the red, green, and blue emission wavelength regions of OLEDs compared to the polarizer, thereby improving external luminous efficiency. However, its application also increases reflectance due to external light, which necessitates optimization strategies to mitigate this drawback. To address this issue, the internal reflection within the OLED panel was reduced by optimizing the capping layer (CPL) thickness from 60 to 40 nm. Additionally, a refractive index matching layer was implemented between the encapsulation glass and the CPL, resulting in a 24.5% reduction in the specular component included (SCI) reflectance and a decrease in the absolute value of the specular component excluded (SCE) reflection color coordinate. White efficiency typically decreases with the reduction of the CPL thickness; however, the Device B exhibited improvements of 13.7%, 16.8%, and 12.4% in white efficiency compared to the polarizer at the CPL thicknesses of 40, 50, and 60 nm, respectively. This enhancement was particularly pronounced in the blue emission region, where the luminous efficiency is inherently lower. These findings indicate that optimizing the CPL thickness to 40 nm in conjunction with the Device B effectively reduces SCI reflectance, improves SCE reflection color coordinate, and enhances white efficiency. This study demonstrates that replacing the conventional polarizer with a film color filter is a viable approach to achieving higher luminous efficiency in rigid top-emitting OLED panels for mobile devices.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103345"},"PeriodicalIF":3.4,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145976205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated prompt-guided multi-modality cell segmentation with shape-aware classification and boundary-aware SAM adaptation
IF 3.4, Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2026-01-07. DOI: 10.1016/j.displa.2025.103337
Deboch Eyob Abera , Jiaye He , Jia Liu , Nazar Zaki , Wenjian Qin
Robust and accurate cell segmentation across diverse imaging modalities remains a critical challenge in microscopy image analysis. While foundation models like the Segment Anything Model (SAM) have demonstrated exceptional performance in natural image segmentation, their adaptation to multi-modal cellular analysis is hindered by domain-specific knowledge gaps and morphological complexity. To bridge this gap, we present a novel SAM-driven framework featuring three systematic innovations: First, we propose Shape-Aware Classification to enhance segmentation of cells with diverse morphologies. Second, an Auto Point Prompt Generation (APPGen) module guides the segmentation model with automatically generated point cues to improve segmentation accuracy. Third, we implement Boundary-Aware SAM Adaptation to effectively resolve overlapping cells in microscopy images. Our experiments show that the proposed framework reduces manual effort through automated prompts, adapts well to different imaging modalities, and enhances segmentation accuracy by incorporating boundary-aware techniques. The source code is available at https://github.com/MIXAILAB/Multi_Modality_CellSeg.
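The abstract does not spell out how APPGen derives its point cues, so the following is a generic sketch of automatic point-prompt generation rather than the authors' implementation: local maxima of a cell-probability map become positive point prompts in the coordinate format a SAM-style predictor expects.

```python
# A minimal sketch (not the authors' APPGen): turn local maxima of a predicted
# cell-probability map into positive point prompts for a SAM-style predictor.
import numpy as np
from skimage.feature import peak_local_max

def auto_point_prompts(prob_map: np.ndarray, min_distance: int = 15,
                       threshold: float = 0.5):
    """Return (N, 2) point coordinates in (x, y) order plus all-positive labels."""
    peaks = peak_local_max(prob_map, min_distance=min_distance, threshold_abs=threshold)
    coords = peaks[:, ::-1].astype(np.float32)     # (row, col) -> (x, y)
    labels = np.ones(len(coords), dtype=np.int32)  # 1 = foreground point
    return coords, labels

# Assumed usage with the segment-anything SamPredictor (not shown here):
#   predictor.set_image(image)
#   masks, scores, _ = predictor.predict(point_coords=coords, point_labels=labels)
```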
{"title":"Automated prompt-guided multi-modality cell segmentation with shape-aware classification and boundary-aware SAM adaptation","authors":"Deboch Eyob Abera ,&nbsp;Jiaye He ,&nbsp;Jia Liu ,&nbsp;Nazar Zaki ,&nbsp;Wenjian Qin","doi":"10.1016/j.displa.2025.103337","DOIUrl":"10.1016/j.displa.2025.103337","url":null,"abstract":"<div><div>Robust and accurate cell segmentation across diverse imaging modalities remains a critical challenge in microscopy image analysis. While foundation models like the Segment Anything Model (SAM) have demonstrated exceptional performance in natural image segmentation, their adaptation to multi-modal cellular analysis is hindered by domain-specific knowledge gaps and morphological complexity. To bridge this gap, we present a novel SAM-driven framework featuring three systematic innovations: First, we propose Shape-Aware Classification to enhance segmentation of cells with diverse morphologies. Second, Auto Point Prompt Generation (APPGen) module guides the segmentation model with automatically generated point cues to improve segmentation accuracy. Third, we implement Boundary-Aware SAM Adaptation to effectively resolve overlapping cells in microscopy images. Our experiments show that the proposed framework reduces manual effort through automated prompts, adapts well to different imaging modalities, and enhances segmentation accuracy by incorporating boundary-aware techniques. The source code is available at <span><span>https://github.com/MIXAILAB/Multi_Modality_CellSeg</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103337"},"PeriodicalIF":3.4,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145938457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An optimized convolutional neural network based on multi-strategy grey wolf optimizer to identify crop diseases and pests
IF 3.4, Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2026-01-06. DOI: 10.1016/j.displa.2026.103341
Xiaobing Yu , Hongqian Zhang , Yuchen Duan , Xuming Wang
Agriculture plays a crucial role in national food security, with crop diseases and pests being major threats to agricultural sustainability. Traditional detection methods are labor-intensive, subjective, and often inaccurate. Recent advancements in deep learning have significantly improved image-based recognition; however, the performance of convolutional neural networks (CNNs) is highly dependent on hyperparameter tuning, which remains a challenging task. To address this issue, this study proposes a multi-strategy grey wolf optimizer (MGWO) to enhance CNN hyperparameter optimization. MGWO improves the global search efficiency of the conventional grey wolf optimizer (GWO), enabling automatic selection of optimal hyperparameters. The proposed approach is evaluated on corn disease and Pentatomidae stinkbug pest classification, comparing its performance against a baseline CNN model and six other optimization algorithms. Experimental results show that MGWO achieves 95.71% accuracy on the corn disease dataset and 94.46% on the pest dataset, outperforming all competing methods.
These findings demonstrate the potential of MGWO in optimizing deep learning models for agricultural applications, providing a robust and automated solution for crop disease and pest recognition.
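For context on the search procedure being improved, the sketch below is the baseline grey wolf optimizer loop driving a black-box objective such as a CNN's validation error; MGWO's multi-strategy additions are not reproduced, and the hyperparameter bounds in the usage comment are hypothetical.

```python
# A minimal baseline GWO sketch (not MGWO): minimize a black-box objective such
# as CNN validation error over continuous hyperparameters within given bounds.
import numpy as np

def gwo_minimize(objective, bounds, n_wolves=10, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    wolves = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    fitness = np.array([objective(w) for w in wolves])
    for t in range(n_iter):
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2.0 * (1.0 - t / n_iter)                  # exploration coefficient decays to 0
        for i in range(n_wolves):
            new = np.zeros_like(wolves[i])
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                new += (leader - A * np.abs(C * leader - wolves[i])) / 3.0
            wolves[i] = np.clip(new, lo, hi)
            fitness[i] = objective(wolves[i])
    best = int(np.argmin(fitness))
    return wolves[best], fitness[best]

# Hypothetical usage, bounds = [(log10 lr), (dropout), (log2 batch size)]:
#   best, err = gwo_minimize(validate_cnn, [(-4, -1), (0.0, 0.6), (4, 7)])
```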
{"title":"An optimized convolutional neural network based on multi-strategy grey wolf optimizer to identify crop diseases and pests","authors":"Xiaobing Yu ,&nbsp;Hongqian Zhang ,&nbsp;Yuchen Duan ,&nbsp;Xuming Wang","doi":"10.1016/j.displa.2026.103341","DOIUrl":"10.1016/j.displa.2026.103341","url":null,"abstract":"<div><div>Agriculture plays a crucial role in national food security, with crop diseases and pests being major threats to agricultural sustainability. Traditional detection methods are labor-intensive, subjective, and often inaccurate. Recent advancements in deep learning have significantly improved image-based recognition; however, the performance of convolutional neural networks (CNNs) is highly dependent on hyperparameter tuning, which remains a challenging <span><span>task. To</span><svg><path></path></svg></span> address this issue, this study proposes a multi-strategy grey wolf optimizer (MGWO) to enhance CNN hyperparameter optimization. MGWO improves the global search efficiency of the conventional grey wolf optimizer (GWO), enabling automatic selection of optimal hyperparameters. The proposed approach is evaluated on corn disease and Pentatomidae stinkbug pest classification, comparing its performance against a baseline CNN model and six other optimization algorithms. Experimental results show that MGWO achieves 95.71% accuracy on the corn disease dataset and 94.46% on the pest dataset, outperforming all competing methods.</div><div>These findings demonstrate the potential of MGWO in optimizing deep learning models for agricultural applications, providing a robust and automated solution for crop disease and pest recognition.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103341"},"PeriodicalIF":3.4,"publicationDate":"2026-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145938459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Parameter-efficient fine-tuning for no-reference image quality assessment: Empirical studies on vision transformer
IF 3.4, Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2026-01-05. DOI: 10.1016/j.displa.2026.103339
GuangLu Sun, Kaiwei Lei, Tianlin Li, Linsen Yu, Suxia Zhu
Parameter-Efficient Fine-Tuning (PEFT) is a transfer learning technique designed to adapt pre-trained models to downstream tasks while minimizing parameter and computational complexity. In recent years, No-Reference Image Quality Assessment (NR-IQA) methods based on pre-trained visual models have achieved significant progress. However, most of these methods rely on full fine-tuning, which requires substantial computational and memory resources. A natural question arises: can PEFT techniques achieve parameter-efficient NR-IQA with good performance? To explore this, we perform empirical studies using several PEFT methods on a pre-trained Vision Transformer (ViT) model. Specifically, we select three PEFT approaches – adapter tuning, prompt tuning, and partial tuning – that have proven effective in general vision tasks, and investigate whether they can achieve performance comparable to traditional visual NR-IQA models. Among them, which is the most effective? Furthermore, we examine the impact of four key factors on the results: fine-tuning position, parameter configuration, layer selection strategy, and the scale of pre-trained weights. Finally, we evaluate whether the optimal PEFT strategy on ViT can be generalized to other Transformer-based architectures. This work offers valuable insights and practical guidance for future research on PEFT methods in NR-IQA tasks.
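As a concrete example of one of the three PEFT families compared here, the sketch below is a generic bottleneck adapter wrapped around frozen ViT features in PyTorch; it is a common formulation used for illustration, not necessarily the exact adapter configuration the study evaluates.

```python
# A generic bottleneck adapter sketch for PEFT on a frozen ViT backbone.
# Dimensions and placement are illustrative, not the paper's exact setup.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()
        nn.init.zeros_(self.up.weight)   # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))   # residual bottleneck update

# Typical PEFT recipe: freeze the backbone, train only adapters and the IQA head.
#   for p in vit.parameters():
#       p.requires_grad = False
```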
{"title":"Parameter-efficient fine-tuning for no-reference image quality assessment: Empirical studies on vision transformer","authors":"GuangLu Sun,&nbsp;Kaiwei Lei,&nbsp;Tianlin Li,&nbsp;Linsen Yu,&nbsp;Suxia Zhu","doi":"10.1016/j.displa.2026.103339","DOIUrl":"10.1016/j.displa.2026.103339","url":null,"abstract":"<div><div>Parameter-Efficient Fine-Tuning (PEFT) is a transfer learning technique designed to adapt pre-trained models to downstream tasks while minimizing parameter and computational complexity. In recent years, No-Reference Image Quality Assessment (NR-IQA) methods based on pre-trained visual models have achieved significant progress. However, most of these methods rely on full fine-tuning, which requires substantial computational and memory resources. A natural question arises: can PEFT techniques achieve parameter-efficient NR-IQA with good performance? To explore this, we perform empirical studies using several PEFT methods on pre-trained Vision Transformer (ViT) model. Specifically, we select three PEFT approaches – adapter tuning, prompt tuning, and partial tuning – that have proven effective in general vision tasks, and investigate whether they can achieve performance comparable to traditional visual NR-IQA models. Among them, which is the most effective? Furthermore, we examine the impact of four key factors on the results: fine-tuning position, parameter configuration, layer selection strategy, and the scale of pre-trained weights. Finally, we evaluate whether the optimal PEFT strategy on ViT can be generalized to other Transformer-based architectures. This work offers valuable insights and practical guidance for future research on PEFT methods in NR-IQA tasks.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103339"},"PeriodicalIF":3.4,"publicationDate":"2026-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145938456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
AFFLIE: Adaptive feature fusion for low-light image enhancement
IF 3.4, Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2026-01-03. DOI: 10.1016/j.displa.2026.103340
Yaxin Lin , Xiaopeng Li , Lian Zou , Liqing Zhou , Cien Fan
Under low illumination, RGB cameras often capture images with significant noise and low visibility, while event cameras, with their high dynamic range characteristics, emerge as a promising solution for improving image quality in low-light environments by supplementing image details. In this paper, we propose a novel image enhancement framework called AFFLIE, which integrates event and frame-based techniques to improve image quality in low-light conditions. The framework introduces a Multi-scale Spatial-Channel Transformer Encoder (MS-SCTE) to address low-light image noise and event temporal characteristics. Additionally, an Adaptive Feature Fusion Module (AFFM) is proposed to dynamically aggregate features from both image and event streams, enhancing generalization performance. The framework demonstrates superior performance on the SDE, LIE and RELED datasets by enhancing noise reduction and detail preservation.
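Since the abstract describes AFFM only at a high level, the sketch below shows one simple form of adaptive fusion between frame and event features: a learned per-pixel gate producing a convex combination. It illustrates the general idea, not the authors' module.

```python
# A minimal gated-fusion sketch (not the authors' AFFM): adaptively blend frame
# and event feature maps of identical shape with a learned per-pixel gate.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, frame_feat: torch.Tensor, event_feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([frame_feat, event_feat], dim=1))  # weights in [0, 1]
        return g * frame_feat + (1.0 - g) * event_feat             # convex combination
```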
{"title":"AFFLIE: Adaptive feature fusion for low-light image enhancement","authors":"Yaxin Lin ,&nbsp;Xiaopeng Li ,&nbsp;Lian Zou ,&nbsp;Liqing Zhou ,&nbsp;Cien Fan","doi":"10.1016/j.displa.2026.103340","DOIUrl":"10.1016/j.displa.2026.103340","url":null,"abstract":"<div><div>Under low illumination, RGB cameras often capture images with significant noise and low visibility, while event cameras, with their high dynamic range characteristic, emerge as a promising solution for improving image quality in the low-light environment by supplementing image details in low-light condition. In this paper, we propose a novel image enhancement framework called AFFLIE, which integrates event and frame-based techniques to improve image quality in low-light conditions. The framework introduces a Multi-scale Spatial-Channel Transformer Encoder (MS-SCTE) to address low-light image noise and event temporal characteristics. Additionally, an Adaptive Feature Fusion Module (AFFM) is proposed to dynamically aggregate features from both image and event streams, enhancing generalization performance. The framework demonstrates superior performance on the SDE, LIE and RELED datasets by enhancing noise reduction and detail preservation.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103340"},"PeriodicalIF":3.4,"publicationDate":"2026-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145938455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Class extension logits distillation for few-shot object detection
IF 3.4, Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2026-01-02. DOI: 10.1016/j.displa.2026.103338
Taijin Zhao, Heqian Qiu, Lanxiao Wang, Yu Dai, Qingbo Wu, Hongliang Li
Few-Shot Object Detection (FSOD) aims at learning robust detectors under extreme data imbalance between abundant base classes and scarce novel classes. While recent transfer learning paradigms achieve initial success through sequential base class pre-training and novel class fine-tuning, their fundamental assumption that base class trained feature encoder can generalize to novel class instances reveals critical limitations due to the information suppression of novel classes. Knowledge distillation from vision-language models like CLIP presents promising solutions, yet conventional distillation approaches exhibit inherent flaws from the perspective of Information Bottleneck (IB) principle: CLIP’s broad semantic understanding results in low information compression, and feature distillation can struggle to reconcile with FSOD’s high information compression demand, potentially leading to suboptimal information compression of the detector. Conversely, while logits distillation using only base classes can enhance information compression, it fails to preserve and transfer crucial novel class semantics from CLIP. To address these challenges, we propose a unified framework comprising Class Extension Logits Distillation (CELD) and Virtual Knowledge Parameter Initializer (VKPInit). During base training, CELD uses CLIP’s text encoder to create an expanded base-novel classifier. This acts as an IB, providing target distributions from CLIP’s visual features for both base and unseen novel classes. The detector aligns to these distributions using its base classifier and a virtual novel classifier, allowing it to learn compressed, novel-aware knowledge from CLIP. Subsequently, during novel tuning, VKPInit leverages the virtual novel classifier learned in CELD to provide semantically-informed initializations for the novel class heads, mitigating initialization bias and enhancing resistance to overfitting. Extensive experiments on PASCAL VOC and MS COCO demonstrate the robustness and superiority of our proposed method over multiple baselines.
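To make the logits-distillation idea concrete, the sketch below matches student logits to a teacher distribution built from CLIP features against an extended base-plus-novel text-embedding classifier, in the spirit of CELD. The tensor shapes, temperature, and exact matching rule are illustrative assumptions, not the paper's specification.

```python
# A hedged sketch of CELD-style logits distillation: CLIP text embeddings for
# base + novel class names act as a fixed extended classifier, and the
# student's logits are aligned to the resulting teacher distribution via KL.
import torch
import torch.nn.functional as F

def celd_style_loss(student_logits: torch.Tensor,  # (N, C_base + C_novel)
                    clip_visual: torch.Tensor,     # (N, D) CLIP region/image features
                    text_embeds: torch.Tensor,     # (C_base + C_novel, D) CLIP text features
                    tau: float = 2.0) -> torch.Tensor:
    teacher_logits = F.normalize(clip_visual, dim=-1) @ F.normalize(text_embeds, dim=-1).T
    teacher = F.softmax(teacher_logits / tau, dim=-1)
    log_student = F.log_softmax(student_logits / tau, dim=-1)
    return F.kl_div(log_student, teacher, reduction="batchmean") * tau ** 2
```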
{"title":"Class extension logits distillation for few-shot object detection","authors":"Taijin Zhao,&nbsp;Heqian Qiu,&nbsp;Lanxiao Wang,&nbsp;Yu Dai,&nbsp;Qingbo Wu,&nbsp;Hongliang Li","doi":"10.1016/j.displa.2026.103338","DOIUrl":"10.1016/j.displa.2026.103338","url":null,"abstract":"<div><div>Few-Shot Object Detection (FSOD) aims at learning robust detectors under extreme data imbalance between abundant base classes and scarce novel classes. While recent transfer learning paradigms achieve initial success through sequential base class pre-training and novel class fine-tuning, their fundamental assumption that base class trained feature encoder can generalize to novel class instances reveals critical limitations due to the information suppression of novel classes. Knowledge distillation from vision-language models like CLIP presents promising solutions, yet conventional distillation approaches exhibit inherent flaws from the perspective of Information Bottleneck (IB) principle: CLIP’s broad semantic understanding results in low information compression, and feature distillation can struggle to reconcile with FSOD’s high information compression demand, potentially leading to suboptimal information compression of the detector. Conversely, while logits distillation using only base classes can enhance information compression, it fails to preserve and transfer crucial novel class semantics from CLIP. To address these challenges, we propose a unified framework comprising Class Extension Logits Distillation (CELD) and Virtual Knowledge Parameter Initializer (VKPInit). During base training, CELD uses CLIP’s text encoder to create an expanded base-novel classifier. This acts as an IB, providing target distributions from CLIP’s visual features for both base and unseen novel classes. The detector aligns to these distributions using its base classifier and a virtual novel classifier, allowing it to learn compressed, novel-aware knowledge from CLIP. Subsequently, during novel tuning, VKPInit leverages the virtual novel classifier learned in CELD to provide semantically-informed initializations for the novel class heads, mitigating initialization bias and enhancing resistance to overfitting. Extensive experiments on PASCAL VOC and MS COCO demonstrate the robustness and superiority of our proposed method over multiple baselines.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103338"},"PeriodicalIF":3.4,"publicationDate":"2026-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145976281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A polynomial regression-based calibration method for enhancing chromaticity and luminance accuracy at low luminance levels of LCDs with automated sampling and compensation mechanisms
IF 3.4, Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-12-30. DOI: 10.1016/j.displa.2025.103336
Yi-Ming Li , Wen Meng , Chih-Yu Tsai , Tsung-Xian Lee
This study presents a calibration methodology designed to enhance the chromaticity and luminance accuracy of LCD monitors under low-luminance conditions, specifically targeting cost-effective medical display applications. The proposed system integrates a low-cost color sensor with a polynomial regression-based model, enhanced by automated sampling and low-luminance compensation techniques. Compared to conventional calibration workflows, the proposed system reduces the number of required samples by more than 50% while achieving comparable or superior accuracy, particularly under low-luminance conditions. This is enabled by a novel combination of luminance-aware automated sampling and perceptually guided compensation mechanisms. The automated sampling strategy significantly reduces the number of required calibration samples from 96 to 44 while maintaining high calibration accuracy, achieving an average luminance error (ΔL) of 0.606% and a color difference (ΔE) of 0.091. The low-luminance compensation algorithm mitigates accuracy degradation in darker regions, ensuring compliance with stringent medical-grade performance standards. These results demonstrate that high-precision calibration can be achieved using economical color sensors, offering a practical and scalable solution for medical-grade LCDs.
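The core fitting step can be pictured as a polynomial regression from raw color-sensor readings to reference-instrument luminance and chromaticity measured on the sampled patches. The scikit-learn sketch below is a minimal illustration under assumed array shapes; it does not reproduce the paper's automated sampling or low-luminance compensation mechanisms.

```python
# A minimal calibration-fitting sketch (illustrative, not the paper's model):
# polynomial regression from sensor RGB readings to reference L, x, y values.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def fit_calibration(sensor_rgb: np.ndarray, reference_Lxy: np.ndarray, degree: int = 2):
    """sensor_rgb: (n, 3) raw readings; reference_Lxy: (n, 3) colorimeter targets."""
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(sensor_rgb, reference_Lxy)
    return model

# corrected = fit_calibration(sensor_rgb, reference_Lxy).predict(new_readings)
```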
{"title":"A polynomial regression-based calibration method for enhancing chromaticity and luminance accuracy at low luminance levels of LCDs with automated sampling and compensation mechanisms","authors":"Yi-Ming Li ,&nbsp;Wen Meng ,&nbsp;Chih-Yu Tsai ,&nbsp;Tsung-Xian Lee","doi":"10.1016/j.displa.2025.103336","DOIUrl":"10.1016/j.displa.2025.103336","url":null,"abstract":"<div><div>This study presents a calibration methodology designed to enhance the chromaticity and luminance accuracy of LCD monitors under low-luminance conditions, specifically targeting cost-effective medical display applications. The proposed system integrates a low-cost color sensor with a polynomial regression-based model, enhanced by automated sampling and low-luminance compensation techniques. Compared to conventional calibration workflows, the proposed system reduces the number of required samples by more than 50% while achieving comparable or superior accuracy, particularly under low-luminance conditions. This is enabled by a novel combination of luminance-aware automated sampling and perceptually guided compensation mechanisms. The automated sampling strategy significantly reduces the number of required calibration samples from 96 to 44 while maintaining high calibration accuracy, achieving an average luminance error (Δ<em>L</em>) of 0.606% and a color difference (Δ<em>E</em>) of 0.091. The low-luminance compensation algorithm mitigates accuracy degradation in darker regions, ensuring compliance with stringent medical-grade performance standards. These results demonstrate that high-precision calibration can be achieved using economical color sensors, offering a practical and scalable solution for medical-grade LCDs.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103336"},"PeriodicalIF":3.4,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145938454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Thermal memory of chair: Physics-guided generalization for sitting posture inference using thermal imaging
IF 3.4, Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-12-30. DOI: 10.1016/j.displa.2025.103331
Jin Ai , Gan Pei , Bitao Ma , Menghan Hu , Jian Zhang
Thermal imaging offers a viable approach for contactless posture monitoring due to its privacy-preserving nature and ability to capture residual thermal patterns. Existing methods exhibit limited generalization capabilities across different materials and thermal decay stages, coupled with a lack of reliable physical interpretability. To address these challenges, this study proposes an integrated paradigm combining generative data augmentation, visual transformer classification, and finite element (FE) simulation. The proposed pipeline first enhances data diversity through a generative model, then employs a Transformer-based classifier to achieve accurate recognition of 9 sitting postures. Finally, a heat conduction model is constructed to simulate the real thermal decay temperature field, decoding the influence of material and time on buttock thermal patterns. Through this paradigm, we identify a critical temperature difference threshold of 2.6 ± 0.06 K, beyond which model performance significantly degrades. Systematic analysis demonstrates that maintaining surface temperatures above this threshold during the initial 30 s enables the model to sustain accuracy above 85%. Furthermore, we quantified the direct impact of material thermophysical parameters on the effective detection window, revealing that materials with lower thermal conductivity (e.g., plastics) extend reliable identification duration. Validation on an independent test set featuring two materials and varying decay durations demonstrated a classification accuracy of 0.9162. This study establishes a thermal imaging-based posture analysis paradigm, providing a theoretical foundation and practical solutions for real-world applications in privacy-sensitive scenarios by decoding buttock thermal patterns. The dataset and code supporting this study are publicly available at: https://github.com/AJ-1995/Thermal-Memory-of-Chair.
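As a back-of-the-envelope view of how material properties set the usable detection window, the sketch below applies a lumped-capacitance exponential cooling model to the reported 2.6 K threshold. It is a simplification of the paper's finite-element treatment, and the initial temperature difference and time constants are hypothetical.

```python
# A lumped-capacitance sketch (not the paper's FE model): time until the residual
# temperature difference decays below the detection threshold. Values are assumed.
import numpy as np

def detection_window(delta_T0: float, tau: float, threshold: float = 2.6) -> float:
    """Seconds until delta_T0 * exp(-t / tau) drops below threshold."""
    if delta_T0 <= threshold:
        return 0.0
    return tau * float(np.log(delta_T0 / threshold))

print(detection_window(delta_T0=8.0, tau=40.0))  # slower-cooling surface, e.g. plastic (assumed)
print(detection_window(delta_T0=8.0, tau=15.0))  # faster-cooling surface (assumed)
```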
{"title":"Thermal memory of chair: Physics-guided generalization for sitting posture inference using thermal imaging","authors":"Jin Ai ,&nbsp;Gan Pei ,&nbsp;Bitao Ma ,&nbsp;Menghan Hu ,&nbsp;Jian Zhang","doi":"10.1016/j.displa.2025.103331","DOIUrl":"10.1016/j.displa.2025.103331","url":null,"abstract":"<div><div>Thermal imaging offers a viable approach for contactless posture monitoring due to its privacy-preserving nature and ability to capture residual thermal patterns. Existing methods exhibit limited generalization capabilities across different materials and thermal decay stages, coupled with a lack of reliable physical interpretability. To address these challenges, this study proposes an integrated paradigm combining generative data augmentation, visual transformer classification, and finite element (FE) simulation. The proposed pipeline first enhances data diversity through a generative model, then employs a Transformer-based classifier to achieve accurate recognition of 9 sitting postures. Finally, a heat conduction model is constructed to simulate the real thermal decay temperature field, decoding the influence of material and time on buttock thermal patterns. Through this paradigm, we identify a critical temperature difference threshold of 2.6 <span><math><mo>±</mo></math></span> 0.06 K, beyond which model performance significantly degrades. Systematic analysis demonstrates that maintaining surface temperatures above this threshold during the initial 30 s enables the model to sustain accuracy above 85%. Furthermore, we quantified the direct impact of material thermophysical parameters on the effective detection window, revealing that materials with lower thermal conductivity (e.g., plastics) extend reliable identification duration. Validation on an independent test set featuring two materials and varying decay durations demonstrated a classification accuracy of 0.9162. This study establishes a thermal imaging-based posture analysis paradigm, providing a theoretical foundation and practical solutions for real-world applications in privacy-sensitive scenarios by decoding buttock thermal patterns. The dataset and code supporting this study are publicly available at: <span><span>https://github.com/AJ-1995/Thermal-Memory-of-Chair</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103331"},"PeriodicalIF":3.4,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145938458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Quantum-enhanced gold rush Optimizer for multi-threshold segmentation of lupus nephritis pathological images
IF 3.4, Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-12-28. DOI: 10.1016/j.displa.2025.103334
Mingyang Yu , Haotian Lu , Donglin Wang , Ji Du , Desheng Kong , Xiaoxuan Xu , Jing Xu
Lupus Nephritis (LN), a severe complication of Systemic Lupus Erythematosus (SLE), critically affects renal function. To improve diagnostic accuracy, multi-threshold image segmentation (MTIS) techniques based on metaheuristic (MH) algorithms are widely adopted. However, traditional MH algorithms often suffer from premature convergence, limiting their global search capabilities. This study proposes a Quantum-Enhanced Hybrid Gold Rush Optimizer (QHGRO) that integrates quantum computing to enhance optimization performance. QHGRO is applied to an MTIS framework that utilizes a non-local means two-dimensional histogram to encode image information and employs Rényi entropy as the fitness function. The optimizer incorporates a Quantum Computing-Driven Adaptive Variation strategy, where quantum superposition enables parallel exploration of multiple states, and quantum mutation introduces controlled randomness to enhance global search and avoid local optima. To further improve performance, QHGRO includes a Stochastic Lévy Flight strategy during the Collaboration between Prospectors phase to enhance exploration and population diversity, and a Dynamic Fitness Distance Balance strategy during the Gold Mining phase to improve convergence accuracy. Experimental results on CEC2017 and CEC2022 benchmark functions demonstrate that QHGRO achieves competitive performance, often approaching global optima. In two engineering design problems—Speed Reducer Design and Three-Bar Truss Design—QHGRO outperforms classical algorithms (PSO, GWO, DE), newer algorithms (NRBO, CPO, RUN, BKA, SBOA, GRO), and advanced variants (MPSO, IAGWO). In LN pathological image segmentation tasks, the proposed method generates clear, high-quality segmented images, offering valuable support for clinical diagnosis.
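To show the kind of objective such a metaheuristic maximizes, the sketch below evaluates a Rényi-entropy fitness for a candidate threshold vector over a plain 1-D gray-level histogram. The paper works on a non-local-means 2-D histogram and searches with QHGRO, so this is a simplified illustration with an assumed entropy order.

```python
# A simplified Rényi-entropy fitness for multi-threshold segmentation on a 1-D
# histogram (the paper uses a non-local-means 2-D histogram). alpha is assumed.
import numpy as np

def renyi_fitness(hist: np.ndarray, thresholds, alpha: float = 0.7) -> float:
    p = hist.astype(float) / hist.sum()
    cuts = [0, *sorted(int(t) for t in thresholds), len(p)]
    total = 0.0
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        w = p[lo:hi].sum()
        if w <= 0:
            continue
        q = p[lo:hi] / w                          # class-conditional distribution
        total += np.log((q ** alpha).sum()) / (1.0 - alpha)
    return total                                  # larger is better; maximize over thresholds

# hist = np.histogram(image.ravel(), bins=256, range=(0, 256))[0]
# A metaheuristic (QHGRO in the paper) proposes threshold vectors and keeps the best fitness.
```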
狼疮肾炎(LN)是系统性红斑狼疮(SLE)的严重并发症,严重影响肾功能。为了提高诊断准确率,基于元启发式(MH)算法的多阈值图像分割(MTIS)技术被广泛采用。然而,传统的MH算法往往存在过早收敛的问题,限制了其全局搜索能力。本研究提出一种量子增强型混合淘金热优化器(QHGRO),该优化器集成了量子计算以提高优化性能。将QHGRO应用于MTIS框架,该框架利用非局部均值二维直方图对图像信息进行编码,并采用rsamnyi熵作为适应度函数。优化器采用量子计算驱动的自适应变化策略,其中量子叠加允许并行探索多个状态,量子突变引入可控随机性以增强全局搜索并避免局部最优。为了进一步提高QHGRO的性能,在勘探者之间的协作阶段引入了随机lsamvy飞行策略以增强勘探和种群多样性,在金矿开采阶段引入了动态适应度距离平衡策略以提高收敛精度。在CEC2017和CEC2022基准函数上的实验结果表明,QHGRO实现了具有竞争力的性能,通常接近全局最优。在减速器设计和三杆桁架设计这两个工程设计问题上,qhgro优于经典算法(PSO、GWO、DE)、新算法(NRBO、CPO、RUN、BKA、spoa、GRO)和高级算法(MPSO、IAGWO)。在LN病理图像分割任务中,该方法可生成清晰、高质量的分割图像,为临床诊断提供有价值的支持。
{"title":"Quantum-enhanced gold rush Optimizer for multi-threshold segmentation of lupus nephritis pathological images","authors":"Mingyang Yu ,&nbsp;Haotian Lu ,&nbsp;Donglin Wang ,&nbsp;Ji Du ,&nbsp;Desheng Kong ,&nbsp;Xiaoxuan Xu ,&nbsp;Jing Xu","doi":"10.1016/j.displa.2025.103334","DOIUrl":"10.1016/j.displa.2025.103334","url":null,"abstract":"<div><div>Lupus Nephritis (LN), a severe complication of Systemic Lupus Erythematosus (SLE), critically affects renal function. To improve diagnostic accuracy, multi-threshold image segmentation (MTIS) techniques based on metaheuristic (MH) algorithms are widely adopted. However, traditional MH algorithms often suffer from premature convergence, limiting their global search capabilities. This study proposes a Quantum-Enhanced Hybrid Gold Rush Optimizer (QHGRO) that integrates quantum computing to enhance optimization performance. QHGRO is applied to an MTIS framework that utilizes a non-local means two-dimensional histogram to encode image information and employs Rényi entropy as the fitness function. The optimizer incorporates a Quantum Computing-Driven Adaptive Variation strategy, where quantum superposition enables parallel exploration of multiple states, and quantum mutation introduces controlled randomness to enhance global search and avoid local optima. To further improve performance, QHGRO includes a Stochastic Lévy Flight strategy during the Collaboration between Prospectors phase to enhance exploration and population diversity, and a Dynamic Fitness Distance Balance strategy during the Gold Mining phase to improve convergence accuracy. Experimental results on CEC2017 and CEC2022 benchmark functions demonstrate that QHGRO achieves competitive performance, often approaching global optima. In two engineering design problems—Speed Reducer Design and Three-Bar Truss Design—QHGRO outperforms classical algorithms (PSO, GWO, DE), newer algorithms (NRBO, CPO, RUN, BKA, SBOA, GRO), and advanced variants (MPSO, IAGWO). In LN pathological image segmentation tasks, the proposed method generates clear, high-quality segmented images, offering valuable support for clinical diagnosis.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103334"},"PeriodicalIF":3.4,"publicationDate":"2025-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145883334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Luminance evaluation and control method for glass curtain wall LED media facade displays based on human visual perception
IF 3.4, Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2025-12-28. DOI: 10.1016/j.displa.2025.103335
Bo Wang , Yuan Chen , Ayin Yan , Kepan Xu , Wenhao Bao , Yu Luo , Wenqing Xie , Xinshuo Zhang , Ying He
The proliferation of glass curtain wall LED media facade displays (G-LMDs) is transforming urban night environments but also introducing significant visual discomfort to observers and contributing to light pollution through glare, sky glow, and light intrusion. These impacts arise from the outward-facing luminous light sources mounted on building facades, which generate high luminance and strong luminance contrast at night. Existing building facade luminance standards, which were formulated for floodlighting, cannot evaluate and guide this new typology. This study proposes a novel “point-line-surface” luminance evaluation method that integrates the lighting characteristics of G-LMDs with human luminance perception properties. We quantified the human perceptual impact of G-LMDs by conducting luminance tests on their point and line sources from an observer’s perspective and converting the results into equivalent surface-source luminance. A key finding is that comfortable surface luminance is less influenced by array spacing and type, demonstrating high stability, which supports its use as a reliable metric for evaluating and controlling G-LMDs luminance. Based on this stability and its variability with ambient luminance, this study proposes G-LMDs luminance control values of 70 cd/m², 65 cd/m², and 50 cd/m² for high, medium, and low ambient luminance, respectively. Furthermore, we invert the evaluation method into a practical “surface-line-point” design strategy to translate perceptual luminance targets into actionable lighting parameters, offering specific recommendations for different ambient luminance conditions. The proposed evaluation and control method and design strategy offer practical guidance for the design of urban G-LMDs and present a viable strategy for mitigating urban light pollution and supporting landscape management.
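One simple way to picture the point-to-surface conversion is area averaging: luminance measured on the discrete LED pixels, weighted by the fraction of facade area they occupy, gives an equivalent surface luminance. The sketch below uses that fill-factor assumption with hypothetical numbers; the paper's actual conversion procedure may differ.

```python
# A heavily hedged sketch: area-averaged ("equivalent surface") luminance from
# point-source luminance and the facade fill factor. Numbers are hypothetical.
def equivalent_surface_luminance(source_luminance_cd_m2: float, fill_factor: float) -> float:
    """Average facade luminance, assuming the regions between pixels stay dark."""
    return source_luminance_cd_m2 * fill_factor

# e.g. LED pixels covering 2% of the facade at 3000 cd/m2:
print(equivalent_surface_luminance(3000.0, 0.02))  # 60 cd/m2, within the proposed 50-70 range
```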
{"title":"Luminance evaluation and control method for glass curtain wall LED media facade displays based on human visual perception","authors":"Bo Wang ,&nbsp;Yuan Chen ,&nbsp;Ayin Yan ,&nbsp;Kepan Xu ,&nbsp;Wenhao Bao ,&nbsp;Yu Luo ,&nbsp;Wenqing Xie ,&nbsp;Xinshuo Zhang ,&nbsp;Ying He","doi":"10.1016/j.displa.2025.103335","DOIUrl":"10.1016/j.displa.2025.103335","url":null,"abstract":"<div><div>The proliferation of glass curtain wall LED media facade displays (G-LMDs) is transforming urban night environments but also introducing significant visual discomfort to observers and contributing to light pollution through glare, sky glow, and light intrusion. These impacts arise from the outward-facing luminous light sources mounted on building facades, which generate high luminance and strong luminance contrast at night. Existing building facade luminance standards, which were formulated for floodlighting, cannot evaluate and guide this new typology. This study proposes a novel “point-line-surface” luminance evaluation method that integrates the lighting characteristics of G-LMDs with human luminance perception properties. We quantified the human perceptual impact of G-LMDs by conducting luminance tests on their point and line sources from an observer’s perspective and converting the results into equivalent surface-source luminance. A key finding is that comfortable surface luminance is less influenced by array spacing and type, demonstrating high stability, which supports its use as a reliable metric for evaluating and controlling G-LMDs luminance. Based on this stability and its variability with ambient luminance, this study proposes G-LMDs luminance control values of 70 cd/m<sup>2</sup>, 65 cd/m<sup>2</sup>, and 50 cd/m<sup>2</sup> for high, medium, and low ambient luminance, respectively. Furthermore, we invert the evaluation method into a practical “surface-line-point” design strategy to translate perceptual luminance targets into actionable lighting parameters, offering specific recommendations for different ambient luminance conditions. The proposed evaluation and control method and design strategy offer practical guidance for the design of urban G-LMDs and presents a viable strategy for mitigating urban light pollution and supporting landscape management.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"92 ","pages":"Article 103335"},"PeriodicalIF":3.4,"publicationDate":"2025-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145883335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0