
Journal of the Society for Information Display: Latest Publications

Visual perception of distance in 3D-augmented reality head-up displays
IF 1.7, CAS Q4 (Engineering & Technology), Q3 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2024-08-11. DOI: 10.1002/jsid.2000
Tae Hee Lee, Young Ju Jeong

A head-up display (HUD) is conveniently used to provide information on the display without changing the user's gaze. 3D HUDs, capable of projecting three-dimensional information beyond the 2D HUDs, enable the projection of augmented reality (AR) objects into the real world. Research on the perception of 3D AR HUDs is crucial for their efficient utilization and secure commercialization. In this study, we examined whether a 3D HUD is more comfortable than a 2D HUD in the context of viewing real-world environments and augmented reality objects together. Additionally, we analyzed participants' perception of distance and fatigue for AR objects at varying distances from the 3D HUD. The study found that using a 3D HUD in an AR environment resulted in less fatigue than using a 2D HUD, as determined by a Mann–Whitney statistical analysis. Participants were able to match the depth of AR objects in a 3D HUD within a range of 3 to 50 m, with similar diopter and parallax angular distance errors, regardless of the distance of the AR object. Visual fatigue increases with increasing distance from the virtual-image plane and can be modeled as a quadratic function in the domain of diopter and parallax angles.
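The statistical approach described above, a Mann–Whitney comparison of fatigue between the 2D and 3D HUD groups plus a quadratic fit of fatigue against distance from the virtual-image plane in diopters, can be sketched as follows. All data values here are illustrative placeholders, not the study's measurements:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical subjective fatigue scores (higher = more fatigue);
# illustrative only, not the paper's data.
fatigue_2d = np.array([6, 7, 5, 8, 7, 6, 8, 7])
fatigue_3d = np.array([4, 5, 3, 5, 4, 6, 4, 5])

# Mann-Whitney U test, as used in the paper to compare the two conditions.
stat, p = mannwhitneyu(fatigue_3d, fatigue_2d, alternative="less")
print(f"U = {stat}, p = {p:.4f}")

# Quadratic model of fatigue versus diopter offset from the
# virtual-image plane, per the paper's modeling domain.
diopter_offset = np.array([0.0, 0.1, 0.2, 0.3, 0.4])  # 1/m, illustrative
fatigue_score = np.array([2.0, 2.3, 3.0, 4.1, 5.6])   # illustrative
coeffs = np.polyfit(diopter_offset, fatigue_score, 2)  # [a, b, c]
print("fitted quadratic coefficients:", coeffs)
```

With these placeholder samples the one-sided test rejects at the 5% level, and the fitted leading coefficient is positive, consistent with fatigue growing quadratically away from the virtual-image plane.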

Citations: 0
Virtual and augmented reality: Human sensory-perceptual requirements and trends for immersive spatial computing experiences
IF 1.7, CAS Q4 (Engineering & Technology), Q3 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2024-08-11. DOI: 10.1002/jsid.2001
Achintya K. Bhowmik

Building on several decades of research and development, the recent progress in virtual reality (VR) and augmented reality (AR) devices with spatial computing technologies marks a significant leap in human–computer interaction, with applications ranging from entertainment and education to e-commerce and healthcare. Advances in these technologies promise immersive experiences by simulating and augmenting the real world with computer-generated digital content. The core objective of the VR and AR systems is to create convincing human sensory perceptions, thereby creating immersive and interactive experiences that bridge the gap between virtual and physical realities. However, achieving true immersion remains a goal, and it necessitates a comprehensive understanding of the neuroscience of human multisensory perception and accurate technical implementations to create a consistency between natural and synthetic sensory cues. This paper reviews the human sensory-perceptual requirements vital for achieving such immersion, examines the current status and challenges, and discusses potential future advancements.

Citations: 0
Multi-parameter fusion driver fatigue detection method based on facial fatigue features
IF 1.7, CAS Q4 (Engineering & Technology), Q3 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2024-07-24. DOI: 10.1002/jsid.1343
Xuejing Du, Chengyin Yu, Tianyi Sun

Fatigued driving is one of the main causes of traffic accidents. To improve the detection speed of fatigued-driving recognition, this paper proposes a driver fatigue detection method based on multi-parameter fusion of facial features. It uses a cascaded AdaBoost object classifier to detect faces in video streams. The dlib library is employed for facial key-point detection, locating the driver's eyes and mouth to determine their states. The eye aspect ratio (EAR) is calculated to detect eye closure, and the mouth aspect ratio (MAR) is calculated to detect yawning frequency and count. The detected percentage of eye closure (PERCLOS) value is combined with yawning frequency and count, and a multi-feature fusion approach is used for fatigue detection. Experimental results show that the accuracy of blink detection is 91% and the accuracy of yawn detection is 96.43%. Furthermore, compared with the models in the comparative experiments, this model achieves detection two to four times faster while maintaining accuracy.
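The EAR and PERCLOS measures mentioned above are straightforward to compute once the six eye landmarks are available. A minimal sketch, with hypothetical landmark coordinates standing in for the output of dlib's shape predictor:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks (p1..p6) in the dlib 68-point ordering:
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|).
    The value drops toward 0 as the eye closes."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance p2-p6
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance p3-p5
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)

# Illustrative (x, y) landmark coordinates; in practice these come from
# dlib's shape predictor applied to a detected face region.
open_eye = [(0, 0), (1, -2), (3, -2), (4, 0), (3, 2), (1, 2)]
closed_eye = [(0, 0), (1, -0.2), (3, -0.2), (4, 0), (3, 0.2), (1, 0.2)]

print(eye_aspect_ratio(open_eye))    # large: eyelids apart
print(eye_aspect_ratio(closed_eye))  # near zero: counted as a closed frame

# PERCLOS: fraction of recent frames with the eye closed
# (threshold 0.2 is a hypothetical choice for this toy sequence).
ears = np.array([1.0, 0.9, 0.1, 0.08, 0.95])
perclos = np.mean(ears < 0.2)
print(f"PERCLOS = {perclos:.2f}")
```

A frame-level closed/open decision thresholds EAR, and fatigue is then flagged when PERCLOS over a sliding window exceeds a chosen limit, combined in the paper with the MAR-based yawning statistics.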

Citations: 0
Depth perception optimization of mixed reality simulation systems based on multiple-cue fusion
IF 1.7, CAS Q4 (Engineering & Technology), Q3 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2024-07-21. DOI: 10.1002/jsid.1341
Wei Wang, Tong Chen, Haiping Liu, Jiali Zhang, Qingli Wang, Qinsheng Jiang

Mixed reality (MR) technology can be applied to simulation training, surgical performance improvement, and enhanced 3D gaming experiences, and has attracted extensive attention from researchers. The user's perception in head-mounted MR displays such as HoloLens is critical, especially for precision applications such as virtual hoisting. Designing and adding appropriate depth cues to MR scenes is an effective way to improve users' depth perception. In this study, taking a virtual hoisting training system as an example, a depth perception strategy based on multiple-cue fusion is proposed to improve the perception effect. Based on the mechanism of human depth perception, five kinds of depth cues are designed. The depth perception effect of adding a single cue is studied through perceptual matching experiments. Based on the principle of fuzzy clustering, a multiple-cue comprehensive depth optimization strategy over the viewing-distance scale is proposed. Finally, the perceptual matching results demonstrate the effectiveness of the multi-cue fusion strategy: the average error is reduced by 20.68% compared with the single-cue strategy, which significantly improves spatial depth perception. This research can provide a reference for improving users' depth perception in interactive MR simulation systems.

Citations: 0
Scene‐content‐sensitive real‐time adaptive foveated rendering
IF 1.7, CAS Q4 (Engineering & Technology), Q3 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2024-07-14. DOI: 10.1002/jsid.1346
Chuanyu Shen, Chunyi Chen, Xiaojuan Hu
In recent years, techniques for accelerating rendering by exploiting the limitations of the human visual system have become increasingly prevalent. The foveated rendering method significantly reduces the computational requirements during rendering by reducing image quality in peripheral regions. In this paper, we propose a scene‐content‐sensitive real‐time adaptive foveated rendering method. First, we pre‐render the three‐dimensional (3D) scene at a low resolution. Then, we utilize the low‐resolution pre‐rendered image as input to extract edge, local contrast, and color features. Subsequently, we generate a screen‐space region division map based on the gaze point position. Next, we calculate the visual importance of each 16 × 16 pixel tile based on edge, local contrast, color, and screen‐space region. We then map the visual importance to the shading rate to generate a shading rate control map for the current frame. Finally, we complete the rendering of the current frame based on variable rate shading technology. Experimental results demonstrate that our method effectively enhances the visual quality of images near the foveal region while generating high quality foveal region images. Furthermore, our method can significantly improve performance compared to per‐pixel shading method and existing scene‐content‐based foveated rendering methods.
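The per-tile importance-to-shading-rate mapping described above can be sketched roughly as below. The feature weights, thresholds, and rate levels are hypothetical stand-ins, not the paper's fitted values:

```python
import numpy as np

# Minimal sketch: combine per-tile feature maps into a visual-importance
# score and quantize it to a variable-rate-shading (VRS) coarseness level.
H, W = 4, 6                        # image measured in 16x16-pixel tiles
rng = np.random.default_rng(0)
edges = rng.random((H, W))         # stand-ins for the extracted feature maps
contrast = rng.random((H, W))
color = rng.random((H, W))

# Screen-space region term: importance falls off with eccentricity
# (distance in tiles) from the gaze point.
gaze = (1, 2)
yy, xx = np.mgrid[0:H, 0:W]
ecc = np.hypot(yy - gaze[0], xx - gaze[1])
region = 1.0 / (1.0 + ecc)

# Hypothetical linear fusion of the four cues.
importance = 0.3 * edges + 0.2 * contrast + 0.1 * color + 0.4 * region

# Map importance to discrete shading rates:
# 1 = full rate (1x1), 2 = 2x2 coarse, 4 = 4x4 coarse.
rates = np.select([importance > 0.6, importance > 0.3], [1, 2], default=4)
print(rates)
```

Tiles at and near the gaze point receive the finest rates, while low-importance peripheral tiles are shaded coarsely, which is where the rendering savings come from.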
Citations: 0
Metaverse-based remote support system with smooth combination of free viewpoint observation and hand gesture instruction
IF 1.7, CAS Q4 (Engineering & Technology), Q3 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2024-07-14. DOI: 10.1002/jsid.1339
Takashi Numata, Yuya Ogi, Keiichi Mitani, Kazuyuki Tajima, Yusuke Nakamura, Naohito Ikeda, Kenichi Shimada

Field operations with large machines typically require large onsite task spaces. In these cases, remote skilled workers have to provide step-by-step guidance by observing the onsite situation around the large task space in three dimensions, instructing onsite unskilled workers on how to perform tasks with the correct hand gestures at the right positions, and switching between observation and instruction frequently during remote support. In this study, we developed a Metaverse-based remote support system with a seamless user interface for switching between free viewpoint observation and hand gesture instruction. The proposed system enables remote skilled workers to observe the onsite field from any viewpoint, transfer hand gesture instructions to onsite workers, and seamlessly switch between free viewpoint observation and free hand gesture instruction without having to change devices. We compared the time efficiency of the proposed system and a conventional system through experiments with 28 users and found that our system improved observation time efficiency by 65.7%, instruction time efficiency by 27.9%, and the efficiency of switching between observation and instruction by 14.6%. These results indicate that the proposed system enables remote skilled workers to support onsite workers quickly and efficiently.

Citations: 0
VR device with high resolution, high luminance, and low power consumption using 1.50-in. organic light-emitting diode display
IF 1.7, CAS Q4 (Engineering & Technology), Q3 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2024-07-11. DOI: 10.1002/jsid.1345
Hisao Ikeda, Ryo Hatsumi, Yuki Tamatsukuri, Shoki Miyata, Daiki Nakamura, Munehiro Kozuma, Hidetomo Kobayashi, Yasumasa Yamane, Sachiko Yamagata, Yousuke Tsukamoto, Shunpei Yamazaki

We fabricated a microdisplay with a 1.50-in. organic light-emitting diode (OLED) and a pixel density as high as 3207 ppi. An ideal display with high luminance, low power consumption, an ultrahigh aperture ratio, and a wide color gamut can be fabricated using a metal maskless lithography technology for patterning OLED layers and an oxide semiconductor large-scale integration (OSLSI)/silicon LSI backplane. We designed a virtual reality device that exhibited less ghosting and a field of view of 90° or greater by combining the microdisplay and a novel pancake lens.

Citations: 0
Digital horizontal crosstalk compensation in 8K LCD displays for enhanced image quality
IF 1.7, CAS Q4 (Engineering & Technology), Q3 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2024-07-07. DOI: 10.1002/jsid.1342
Yongwoo Lee, Kiwon Choi, Hyeryoung Park, Yong Ju Kim, Kookhyun Choi, Min Jae Ko

As ultra-high-definition displays have gained popularity, mitigating the horizontal crosstalk effect in 8K LCD panels is crucial. High display resolution requires narrower signal-line integration, intensifying the coupling effect. Traditional methods such as Vcom feedback and increasing the analog voltage drain-drain (AVDD) load are slower and less accurate, leading to increased power consumption. In response, we propose an advanced digital signal compensation method. In this study, we developed a predictive model and investigated the intricate relationships among AVDD, Vcom, and storage capacitor (Cst) ripples on horizontal crosstalk. Optimizing the ripple change (∆G) by varying the compensation coefficient (a) and decay ratio (τ) significantly reduces crosstalk effects. The digital compensation method allows rapid and precise compensation without delays, reducing horizontal crosstalk in 8K LCD panels from 4% to below 0.9%. This surpasses the requirement of minimizing crosstalk to less than 2%, substantially enhancing the image quality of high-resolution displays.
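The compensation idea can be illustrated with a toy model in which the coupling-induced ripple after a data step decays exponentially along the line, and the driver subtracts the predicted ripple from the pixel data in advance. The functional form and coefficients here are hypothetical stand-ins for the paper's fitted a and τ:

```python
import numpy as np

def predicted_ripple(x, delta_g, a=0.8, tau=40.0):
    """Coupling-induced gray-level error at pixel column x after a data
    step delta_g, modeled as a * delta_g * exp(-x / tau). The coefficient
    a and decay ratio tau are illustrative, not the paper's values."""
    return a * delta_g * np.exp(-x / tau)

cols = np.arange(256)
delta_g = 64                        # gray-level step at the pattern boundary
ripple = predicted_ripple(cols, delta_g)

target = np.full(256, 128.0)        # intended gray level after the boundary
driven = target - ripple            # pre-compensated data sent to the panel
observed = driven + ripple          # panel adds the ripple back

# In this idealized model the crosstalk cancels exactly.
print(np.max(np.abs(observed - target)))
```

The practical advantage of the digital approach is that this correction is applied to the pixel data before driving, so it incurs no analog feedback delay.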

Citations: 0
Wide‐viewing‐angle dual‐view integral imaging display
IF 1.7, CAS Q4 (Engineering & Technology), Q3 ENGINEERING, ELECTRICAL & ELECTRONIC. Pub Date: 2024-07-04. DOI: 10.1002/jsid.1344
Bai‐Chuan Zhao, Wei Jia, Wei Fan, Fan Yang, Yang Fu
A wide‐viewing‐angle dual‐view integral imaging display is proposed. A micro‐lens array, a polarizer parallax barrier, and a display panel are aligned in sequence. The display panel is covered with the polarizer parallax barrier. Two types of orthogonal polarizer slits in the polarizer parallax barrier are alternately aligned. Two types of orthogonal polarizer slits polarize the lights from two types of elemental images. The micro‐lens array propagates two types of polarized lights into two primary viewing zones, which coincide at the optimal viewing distance. Different 3D images are, respectively, observed through two types of polarizer glasses. The viewing angle is enhanced and unrelated to the number of elemental images.
引用次数: 0
Correction to “Dual ligand exchange of Cd-free quantum dots and optimal control of ink formulation for improving the performance of all-inkjet-printed qunatum dot light-emitting diodes”
IF 1.7 CAS Zone 4 (Engineering & Technology) Q3 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date: 2024-06-27 DOI: 10.1002/jsid.1340

Ha J, Lee S, Park M, Kim H, Jung YK, Han C, Hwang J, Park G, Lee HJ, Bae WK, Noh S, Kwak D, Kim S, Yoon YG, Lee C. Dual ligand exchange of Cd-free quantum dots and optimal control of ink formulation for improving the performance of all-inkjet-printed qunatum dot light-emitting diodes. J Soc Inf Display. 2024; 32(5): 332–340. https://doi.org/10.1002/jsid.1296

In the article title, the word “quantum” was misspelled. The correct article title is below.

“Dual ligand exchange of Cd-free quantum dots and optimal control of ink formulation for improving the performance of all-inkjet-printed quantum dot light-emitting diodes”

We apologize for this error.
