
Latest publications in Displays

Generative adversarial networks with deep blind degradation powered terahertz ptychography
IF 3.7 CAS Zone 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-08-21 DOI: 10.1016/j.displa.2024.102815
Ziwei Ming , Defeng Liu , Long Xiao , Siyu Tu , Peng Chen , Yingshan Ma , Jinsong Liu , Zhengang Yang , Kejia Wang

Ptychography is an imaging technique that uses the redundancy of information generated by the overlapping of adjacent illuminated regions to calculate the relative phase of adjacent regions and reconstruct the image. To make ptychography better serve engineering applications in the terahertz domain, we propose a deep-learning terahertz ptychography system that is easy to realize in engineering practice. To this end, we use a powerful deep blind degradation model that applies isotropic and anisotropic Gaussian kernels for random blurring, chooses the downsampling mode from nearest, bilinear, and bicubic interpolation and a down-up-sampling method, and introduces Gaussian noise, JPEG compression noise, and processed detector noise. Additionally, a random shuffle strategy is used to further expand the degradation space of the image. Using paired low/high-resolution images generated by the deep blind degradation model, we trained a multi-layer residual network with residual scaling parameters and a dense connection structure, achieving neural-network super-resolution of terahertz ptychography for the first time. We compare our model against two representative neural networks, SwinIR and RealESRGAN. Experimental results show that the proposed method achieves better accuracy and visual quality than other terahertz ptychographic image super-resolution algorithms. Further quantitative evaluation confirms that our method has significant advantages in terahertz ptychographic image super-resolution, reaching 33.09 dB on the peak signal-to-noise ratio (PSNR) metric and 3.05 on the natural image quality evaluator (NIQE) metric. This efficient and engineering-ready approach fills the gap in improving terahertz ptychography with neural networks.
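The degradation pipeline described above (random isotropic/anisotropic Gaussian blur, a choice of downsampling mode, injected noise, and a random shuffle of the operation order) can be sketched as follows. This is a minimal illustration with assumed parameter ranges (kernel size, sigma range, noise level), not the authors' implementation:

```python
import numpy as np

def gaussian_kernel(size, sigma_x, sigma_y):
    # Isotropic when sigma_x == sigma_y, anisotropic otherwise.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 / (2 * sigma_x**2) + yy**2 / (2 * sigma_y**2)))
    return k / k.sum()

def blur(img, kernel):
    # Naive zero-padded "same" 2-D convolution (adequate for a sketch).
    pad = kernel.shape[0] // 2
    p = np.pad(img, pad)
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + kernel.shape[0],
                                 j:j + kernel.shape[1]] * kernel)
    return out

def degrade(hr, rng, scale=2):
    """Map a high-resolution patch to a degraded low-resolution one.

    Blur, downsampling, and noise are applied in a randomly shuffled
    order, mimicking the random-shuffle strategy that widens the
    degradation space. Parameter ranges are illustrative assumptions.
    """
    sigma_x = rng.uniform(0.5, 2.0)
    sigma_y = sigma_x if rng.random() < 0.5 else rng.uniform(0.5, 2.0)
    ops = [
        lambda x: blur(x, gaussian_kernel(5, sigma_x, sigma_y)),
        lambda x: x[::scale, ::scale],                 # nearest-style downsampling
        lambda x: x + rng.normal(0.0, 0.01, x.shape),  # Gaussian sensor noise
    ]
    for k in rng.permutation(len(ops)):
        hr = ops[k](hr)
    return hr
```

Each call to `degrade` draws a different degradation, so a single high-resolution image yields a diverse set of paired low/high-resolution training samples.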

Citations: 0
Spatial–angular–epipolar transformer for light field spatial and angular super-resolution
IF 3.7 CAS Zone 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-08-20 DOI: 10.1016/j.displa.2024.102816
Sizhe Wang , Hao Sheng , Rongshan Chen , Da Yang , Zhenglong Cui , Ruixuan Cong , Zhang Xiong

Transformer-based light field (LF) super-resolution (SR) methods have recently achieved significant performance improvements thanks to global feature modeling by self-attention mechanisms. However, because the transformer was designed for natural language processing, 4D LFs must be reshaped into 1D sequences with an immense number of tokens, which incurs a quadratic computational cost. In this paper, a spatial–angular–epipolar swin transformer (SAEST) is proposed for spatial and angular SR (SASR), which extracts SR information in the spatial, angular, and epipolar domains using local self-attention with shifted windows. Specifically, in SAEST, a spatial swin transformer and an angular standard transformer are first cascaded to extract spatial and angular SR features separately. The extracted SR feature is then reshaped into the epipolar plane image pattern and fed into an epipolar swin transformer to extract spatial–angular correlation information. Finally, several SAEST blocks are cascaded in a U-Net framework to extract multi-scale SR features for SASR. Experimental results indicate that SAEST is a fast transformer-based SASR method with low running time and GPU consumption and outstanding performance on simulated and real-world public datasets.
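The quadratic-cost argument can be made concrete with a back-of-the-envelope count of attention score pairs; the LF dimensions and window size below are assumed purely for illustration, and the factorization is a simplified reading of the spatial/angular/epipolar decomposition:

```python
def full_attention_pairs(u, v, h, w):
    # Flattening a 4D LF of U*V views of H*W pixels into one token
    # sequence makes self-attention compute N^2 pairwise scores.
    n = u * v * h * w
    return n * n

def factorized_pairs(u, v, h, w, win):
    # Decomposed attention, counted per layer:
    spatial = u * v * (h * w) * (win * win)   # windowed attention within each view
    angular = h * w * (u * v) ** 2            # each pixel attends over all views
    epipolar = v * h * (u * w) ** 2           # tokens on one epipolar plane interact
    return spatial + angular + epipolar
```

For a 5×5 LF of 64×64-pixel views with 8×8 windows, the factorized count is more than two orders of magnitude below the full-attention count, which is the practical motivation for windowed and domain-decomposed attention.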

Citations: 0
TS-BEV: BEV object detection algorithm based on temporal-spatial feature fusion
IF 3.7 CAS Zone 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-08-19 DOI: 10.1016/j.displa.2024.102814
Xinlong Dong , Peicheng Shi , Heng Qi , Aixi Yang , Taonian Liang

To accurately identify occluded targets and infer the motion state of objects, we propose a Bird's-Eye View object detection network based on temporal-spatial feature fusion (TS-BEV), which replaces the previous multi-frame sampling method with cyclic propagation of historical-frame instance information. We design a new temporal-spatial feature fusion attention module, which fully integrates temporal information and spatial features and improves inference and training speed. To realize multi-frame feature fusion across multiple scales and views, we propose an efficient temporal-spatial deformable aggregation module, which performs feature sampling and weighted summation over multiple feature maps of historical and current frames and makes full use of the parallel computing capabilities of GPUs and AI chips to further improve efficiency. Furthermore, to remedy the lack of global inference over temporal-spatial fused BEV features and the inability of instance features at different locations to fully interact, we design a BEV self-attention module that performs global operations on features, enhancing global inference ability and allowing full interaction with instance features. We have carried out extensive experiments on the challenging nuScenes BEV object detection dataset. Quantitative results show that our method achieves excellent performance of 61.5% mAP and 68.5% NDS in camera-only 3D object detection tasks, and qualitative results show that TS-BEV effectively handles 3D object detection in complex traffic scenes with poor illumination at night, with good robustness and scalability.
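Per query, the deformable aggregation step described above reduces to a gather plus a weighted sum over features sampled from historical and current frames; the sketch below uses assumed shapes and integer sampling locations (real implementations predict fractional offsets and interpolate bilinearly):

```python
import numpy as np

def deformable_aggregate(feats, points, weights):
    """Weighted sum of features sampled from multiple frames.

    feats:   (T, C, H, W) feature maps for T frames (historical + current)
    points:  (P, 3) integer (t, y, x) sampling locations
    weights: (P,) attention weights, assumed normalized to sum to 1
    returns: (C,) aggregated feature for one query
    """
    gathered = np.stack([feats[t, :, y, x] for t, y, x in points])  # (P, C)
    return (weights[:, None] * gathered).sum(axis=0)
```

Because each query's gather-and-sum is independent, the operation parallelizes trivially across queries, which is what lets GPUs and AI accelerators exploit it efficiently.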

Citations: 0
Skeuomorphic or flat? The effects of icon style on visual search and recognition performance
IF 3.7 CAS Zone 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-08-17 DOI: 10.1016/j.displa.2024.102813
Zhangfan Shen, Tiantian Chen, Yi Wang, Moke Li, Jiaxiang Chen, Zhanpeng Hu

Although there have been many previous studies on icon visual search and recognition performance, only a few have considered the effects of both the internal and external characteristics of icons. In this behavioral study, we employed a visual search task and a semantic recognition task to explore the effects of icon style, semantic distance (SD), and task difficulty on users’ performance in perceiving and identifying icons. First, we created and filtered 64 new icons, which were divided into four different groups (flat design & close SD, flat design & far SD, skeuomorphic design & close SD, skeuomorphic design & far SD) through expert evaluation. A total of 40 participants (13 men and 27 women, ages ranging from 19 to 25 years, mean age = 21.9 years, SD=1.93) were asked to perform an icon visual search task and an icon recognition task after a round of learning. Participants’ accuracy and response time were measured as a function of the following independent variables: two icon styles (flat or skeuomorphic style), two levels of SD (close or far), and two levels of task difficulty (easy or difficult). The results showed that flat icons had better visual search performance than skeuomorphic icons; this beneficial effect increased as the task difficulty increased. However, in the icon recognition task, participants’ performance in recalling skeuomorphic icons was significantly better than that in recalling flat icons. Furthermore, a strong interaction effect between icon style and task difficulty was observed for response time. As the task difficulty decreased, the difference in recognition performance between these two different icon styles increased significantly. These findings provide valuable guidance for the design of icons in human–computer interaction interfaces.
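The reported interaction between icon style and task difficulty is, at its core, a difference-of-differences on the 2×2 cell means; the response times below are hypothetical numbers chosen only to illustrate the computation, not data from the study:

```python
def interaction_contrast(cells):
    """Difference-of-differences for a 2x2 (style x difficulty) design.

    cells[style][difficulty] holds a mean response time in ms.
    A non-zero value means the style effect changes with difficulty,
    i.e. an interaction. All numbers used with it here are hypothetical.
    """
    flat = cells["flat"]
    skeu = cells["skeuomorphic"]
    return (skeu["difficult"] - flat["difficult"]) - (skeu["easy"] - flat["easy"])
```

For example, hypothetical means of 620/810 ms (flat, easy/difficult) and 650/920 ms (skeuomorphic, easy/difficult) give a contrast of 80 ms: the flat-icon advantage grows by 80 ms when the task gets harder.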

Citations: 0
Interactive geometry editing of Neural Radiance Fields
IF 3.7 CAS Zone 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-08-13 DOI: 10.1016/j.displa.2024.102810
Shaoxu Li, Ye Pan

Neural Radiance Fields (NeRF) have recently emerged as a promising approach for synthesizing highly realistic images from 3D scenes. This technology has shown impressive results in capturing intricate details and producing photorealistic renderings. However, one of the limitations of traditional NeRF approaches is the difficulty in editing and manipulating the geometry of the scene once it has been captured. This restriction hinders creative freedom and practical applicability.

In this paper, we propose a method that enables interactive geometry editing for neural radiance field manipulation. We use two proxy cages (an inner cage and an outer cage) to edit a scene. The inner cage defines the operation target, and the outer cage defines the adjustment space. Various operations can be applied to the two cages. After cage selection, operations on the inner cage lead to the desired transformation of the inner cage and an adjustment of the outer cage. Users can edit the scene with translation, rotation, scaling, or their combinations. Operations on the corners and edges of the cage are also supported. Our method does not need any explicit 3D geometry representation. The interactive geometry editing applies directly to the implicit neural radiance fields. Extensive experimental results demonstrate the effectiveness of our approach.
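Cage-based editing propagates cage-vertex displacements to points inside the adjustment space via generalized barycentric weights. The sketch below substitutes simple inverse-distance weights for true cage coordinates (such as mean value coordinates) purely for illustration; it is not the paper's method:

```python
import numpy as np

def cage_deform(points, cage, cage_displaced, eps=1e-8):
    """Move `points` according to how the cage vertices moved.

    points:         (P, 3) query points inside the adjustment space
    cage:           (K, 3) original cage vertices
    cage_displaced: (K, 3) cage vertices after the user's edit
    Inverse-distance weights stand in for real cage coordinates;
    the weights for each point sum to 1.
    """
    d = np.linalg.norm(points[:, None, :] - cage[None, :, :], axis=-1)  # (P, K)
    w = 1.0 / (d + eps)
    w /= w.sum(axis=1, keepdims=True)
    return points + w @ (cage_displaced - cage)
```

Because the weights sum to 1 per point, a pure translation of the cage translates every enclosed point rigidly, while moving individual vertices, edges, or corners produces a smooth local deformation.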

Citations: 0
Towards benchmarking VR sickness: A novel methodological framework for assessing contributing factors and mitigation strategies through rapid VR sickness induction and recovery
IF 3.7 CAS Zone 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-08-13 DOI: 10.1016/j.displa.2024.102807
Rose Rouhani , Narmada Umatheva , Jannik Brockerhoff , Behrang Keshavarz , Ernst Kruijff , Jan Gugenheimer , Bernhard E. Riecke

Virtual Reality (VR) sickness remains a significant challenge in the widespread adoption of VR technologies. The absence of a standardized benchmark system hinders progress in understanding and effectively countering VR sickness. This paper proposes an initial step towards a benchmark system, utilizing a novel methodological framework to serve as a common platform for evaluating contributing VR sickness factors and mitigation strategies. Our benchmark, grounded in established theories and leveraging existing research, features both small and large environments. In two research studies, we validated our system by demonstrating its capability to (1) quickly, reliably, and controllably induce VR sickness in both environments, followed by a rapid decline post-stimulus, facilitating cost- and time-effective within-subject studies and increased statistical power, and (2) integrate and evaluate established VR sickness mitigation methods (static and dynamic field of view reduction, blur, and virtual nose), demonstrating their effectiveness in reducing symptoms in the benchmark and enabling their direct comparison within a standardized setting. Our proposed benchmark also enables broader, more comparative research into different technical, setup, and participant variables influencing VR sickness and overall user experience, ultimately paving the way for building a comprehensive database to identify the most effective strategies for specific VR applications.

Citations: 0
A feature fusion module based on complementary attention for medical image segmentation
IF 3.7 CAS Zone 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-08-10 DOI: 10.1016/j.displa.2024.102811
Mingyue Yang , Xiaoxuan Dong , Wang Zhang , Peng Xie , Chuan Li , Shanxiong Chen

Automated segmentation algorithms are a crucial component of medical image analysis, playing an essential role in assisting professionals to achieve accurate diagnoses. Traditional convolutional neural networks (CNNs) face challenges when dealing with complex and variable lesions: limited by the receptive field of convolutional operators, CNNs often struggle to capture long-range dependencies of complex lesions. The transformer’s outstanding ability to capture long-range dependencies offers a new perspective on addressing these challenges. Inspired by this, our research aims to combine the precise spatial detail extraction capabilities of CNNs with the global semantic understanding abilities of transformers. Unlike traditional fusion methods, we propose a fine-grained feature fusion strategy based on complementary attention, deeply exploring and complementarily fusing the feature representations of the encoder. Moreover, considering that merely relying on feature fusion might overlook critical texture details and key edge features in the segmentation task, we designed a feature enhancement module based on information entropy. This module emphasizes shallow texture features and edge information, enabling the model to more accurately capture and enhance multi-level details of the image, further optimizing segmentation results. Our method was tested on multiple public segmentation datasets of polyps and skin lesions, and performed better than state-of-the-art methods. Extensive qualitative experimental results indicate that our method maintains robust performance even when faced with challenging cases of narrow or blurry-boundary lesions.
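The intuition behind an information-entropy criterion for emphasizing texture and edges can be illustrated with the Shannon entropy of a patch's intensity histogram: a flat region scores near zero while a textured region scores high. The 32-bin histogram over [0, 1] is an assumed discretization for this sketch, not the paper's exact module:

```python
import numpy as np

def patch_entropy(patch, bins=32):
    # Shannon entropy (bits) of the intensity histogram; higher in
    # textured or edge-rich regions, near zero in flat regions.
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

A per-patch entropy map of this kind can serve as a spatial weight that boosts shallow texture and edge features before fusion.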

Citations: 0
Evaluation and application strategy of low blue light mode of desktop display based on brightness characteristics
IF 3.7, CAS Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2024-08-10. DOI: 10.1016/j.displa.2024.102809
Wenqian Xu , Peiyu Wu , Qi Yao , Rongjun Zhang , Bang Qin , Dong Wang , Shenfei Chen , Yedong Shen

Long-term use of desktop displays may increase the burden on the visual system, and users can enable a low blue light mode to protect their eyes with respect to circadian effects. In this work, we investigated its influence from two aspects: brightness-related visual effect, namely efficacy and circadian effect, and color quality, namely the color difference Δu’v’ (chromaticity-coordinate offset between two colors) and Duv (deviation from the blackbody locus). A decrease in brightness is accompanied by an increase in efficacy and a diminishing circadian effect. Blue, cyan, and magenta show the largest Δu’v’, and the lower the saturation, the greater the Δu’v’. The lower the correlated color temperature (CCT), the greater the Duv and the farther it deviates from the Planckian locus. We summarize three low blue light mode adjustment strategies based on the red, green, and blue three-channel ratio of the spectrum, and propose an optimized mode using a genetic algorithm, with two optional CCT ranges of 3500–5000 K and 2700–3000 K. Furthermore, we establish the relationship between brightness and gamut coverage to refine the screen brightness range for low blue light mode. This research provides valuable insights into low blue light mode applications and their implications for human-centric healthy displays.
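The metric Δu’v’ used above is simply the Euclidean distance between two colors in the CIE 1976 u’v’ chromaticity plane. A small self-contained sketch using the standard xy-to-u’v’ conversion (the example white-point shift is hypothetical, not data from the paper):

```python
import math

def uv_prime_from_xy(x, y):
    """CIE 1931 xy chromaticity -> CIE 1976 u'v' chromaticity."""
    d = -2.0 * x + 12.0 * y + 3.0
    return (4.0 * x / d, 9.0 * y / d)

def delta_uv_prime(c1, c2):
    """Euclidean color difference in the u'v' plane."""
    return math.hypot(c1[0] - c2[0], c1[1] - c2[1])

# Hypothetical example: a display white point at D65 (x=0.3127, y=0.3290)
# warmed by a low blue light mode toward (x=0.40, y=0.39).
normal = uv_prime_from_xy(0.3127, 0.3290)   # ~(0.1978, 0.4683)
warm = uv_prime_from_xy(0.40, 0.39)
print(round(delta_uv_prime(normal, warm), 4))
```

Computing Duv additionally requires the distance to the Planckian locus at the matching CCT, which needs tabulated blackbody chromaticities and is omitted here.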

Displays, Vol. 84, Article 102809.
Citations: 0
Human pose estimation in complex background videos via Transformer-based multi-scale feature integration
IF 3.7, CAS Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2024-08-08. DOI: 10.1016/j.displa.2024.102805
Chen Cheng, Huahu Xu

Human pose estimation is still a hot research topic. Previous algorithms based on traditional machine learning suffer from difficult feature extraction and low fusion efficiency. To address these problems, we propose a Transformer-based method. We combine three techniques, namely a Transformer-based feature extraction module, a multi-scale feature fusion module, and an occlusion processing mechanism, to capture the human pose. The Transformer-based feature extraction module uses the self-attention mechanism to extract key features from the input sequence; the multi-scale feature fusion module fuses feature information of different scales to enhance the perception ability of the model; and the occlusion processing mechanism effectively handles occlusion in the data and removes background interference. Our method shows excellent performance on the standard Human3.6M dataset and an in-the-wild video dataset, achieving accurate pose prediction for both complex actions and challenging samples.
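The paper's specific modules are not detailed in this listing, but the self-attention mechanism at the core of any Transformer-based extractor is standard scaled dot-product attention, softmax(QK^T/sqrt(d))V. A dependency-free sketch on toy "joint token" embeddings (the inputs are illustrative, not the authors' features):

```python
import math

def softmax(row):
    m = max(row)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(q[0])
    scores = [[sum(a * b for a, b in zip(qr, kr)) / math.sqrt(d) for kr in k]
              for qr in q]
    weights = [softmax(row) for row in scores]
    return [[sum(w * vr[j] for w, vr in zip(wr, v)) for j in range(len(v[0]))]
            for wr in weights]

# Toy joint-token embeddings; in basic self-attention Q = K = V = input.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = self_attention(x, x, x)
print([[round(c, 3) for c in row] for row in out])
```

Each output token is a convex combination of all input tokens, which is exactly the "long-range dependency" property the abstract relies on: a wrist token can attend directly to a hip token regardless of distance in the sequence.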

Displays, Vol. 84, Article 102805.
Citations: 0
Development of low-temperature polycrystalline silicon process and novel 2T2C driving circuits for electric paper
IF 3.7, CAS Tier 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2024-08-08. DOI: 10.1016/j.displa.2024.102808
Yu Jin , Ying Shen , Wen-Jie Xu , Wen-Zhi Fan , Lei Xu , Xiao-Yu Gao , Yong Wu , Zhi-Yi Zhou , Wei-Jie Gu , Dong-Liang Yu , Jian-Qiu Sun , Li-Juan Ke , Wei-Bin Zhang , Wei-Qi Xu , Feng-Ying Xu

In this work, we systematically investigate low-temperature polycrystalline silicon (LTPS)-based driving circuits for electronic paper, with the aim of adopting a small width/length ratio (W/L) for LTPS-based thin-film transistors (TFTs) to reduce switch error and thus mitigate image sticking. Firstly, through detailed exploration of the LTPS process technology, we obtained LTPS-TFTs with extremely low off-state leakage current (IOFF) even at a large source-drain voltage (VDS) of 30 V. Meanwhile, the high on-state current (ION) of the LTPS-TFTs also meets the requirement of fast signal writing to the storage capacitor, owing to their extremely high field-effect mobility (approximately 100 cm2/V⋅s); this makes it possible to fabricate TFTs with relatively small W/L, thereby minimizing switch error. The ID-VD test results reveal that the produced LTPS-TFTs can effectively withstand the maximum voltage difference of 30 V during product operation. Subsequently, the optimal W/L of the LTPS-TFT was determined experimentally. Reliability tests on the obtained LTPS-TFTs show that their threshold voltage (VTH) shifted by 0.08 V after 7200 s under negative bias temperature stress (NBTS), and by only 0.19 V under positive bias temperature stress (PBTS). The aging test results of these LTPS-TFTs exhibit a new physical phenomenon: the IOFF of the LTPS-TFTs strictly matches the aging direction. Next, we propose a novel 2T2C driving circuit for the e-paper, which effectively avoids the adverse effect of IOFF on the frame holding period, and lay it out as an array. Finally, we combine the optimal fabrication process of the LTPS-TFTs with the 2T2C driving circuit design to produce an e-paper with outstanding image-sticking performance.
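The trade-off the abstract describes, high ION for fast writing versus low IOFF for a stable frame holding period, reduces to simple capacitor arithmetic. A back-of-the-envelope sketch (all component values are illustrative assumptions, not figures from the paper):

```python
def write_time_s(c_storage, delta_v, i_on):
    """Time for I_ON to slew the storage capacitor through delta_v (t = C*dV/I)."""
    return c_storage * delta_v / i_on

def hold_droop_v(c_storage, i_off, t_frame):
    """Voltage droop on the storage node from I_OFF over one frame (dV = I*t/C)."""
    return i_off * t_frame / c_storage

# Illustrative values: 1 pF storage cap, 10 V data swing, 1 uA on-current,
# 10 fA off-current, and a 0.5 s frame hold for a slow e-paper refresh.
C = 1e-12
print(write_time_s(C, 10.0, 1e-6))   # ~1e-5 s: writing is effectively instant
print(hold_droop_v(C, 1e-14, 0.5))   # ~5 mV droop: negligible vs. a 10 V swing
```

At a given frame time, the tolerable droop bounds the required IOFF; this is why the extremely low leakage reported above permits a small W/L without visible image sticking.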

Displays, Vol. 84, Article 102808.
Citations: 0