Yi Liu, Yuqing Qiu, Jiaqi Dong, Bo-Ru Yang, Zong Qin
A crucial requirement of augmented reality head-up displays (AR-HUDs) is a continuously adjustable virtual image distance (VID), which allows adaptation to various depths in road environments and thereby avoids visual fatigue. However, the varifocal components commonly used in near-eye displays are unavailable here because AR-HUDs require the varifocal component's aperture to be larger than 10 cm. This study considers Alvarez lenses, which change optical power by sliding two freeform lenses in plane. Under the paraxial assumption, classic Alvarez lenses create a quadratic wavefront profile, but the large aperture and wide diopter variation range required by AR-HUDs lead to significant aberrations. The classic paraxial Alvarez lens design is therefore extended by co-optimizing Alvarez lenses with high-order surface profiles and a primary freeform mirror, yielding a novel varifocal AR-HUD containing Alvarez lenses with apertures larger than 15 cm. The AR-HUD generates a varifocal plane whose VID can be continuously adjusted between 2.5 and 7.5 m, and another focal plane with a fixed VID of 7.5 m. In addition, only one display panel is used, for compactness. Finally, an AR-HUD prototype with a reduced volume of 9.8 L was built. The expected varifocal performance and acceptable imaging quality were experimentally verified through the field of view, VID, and image sharpness.
"A varifocal augmented reality head-up display using Alvarez freeform lenses." Journal of the Society for Information Display 32(5): 226–236. Published 2024-04-24. DOI: 10.1002/jsid.1286.
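The quadratic wavefront produced by sliding a classic Alvarez pair can be checked numerically. The sketch below uses illustrative parameters only (not the paper's design): it builds the textbook cubic surface z = A(xy² + x³/3), slides two complementary plates by ±d, and recovers the paraxial power φ = 4A(n−1)d from the combined thickness.

```python
import numpy as np

def alvarez_sag(x, y, A):
    # Textbook Alvarez cubic surface: z = A * (x*y**2 + x**3 / 3)
    return A * (x * y**2 + x**3 / 3.0)

# Illustrative parameters (not from the paper): cubic coefficient A in 1/mm^2,
# refractive index n, lateral plate shift d in mm.
A, n, d = 1e-4, 1.49, 2.0

# 15 cm aperture sampled in mm
x, y = np.meshgrid(np.linspace(-75, 75, 151), np.linspace(-75, 75, 151))

# Two complementary plates slid by +d and -d along x: the combined
# thickness T = 2*A*d*(x^2 + y^2) + const is quadratic in r, i.e., a thin lens.
T = alvarez_sag(x + d, y, A) - alvarez_sag(x - d, y, A)

r2 = (x**2 + y**2).ravel()
coeff = np.polyfit(r2, T.ravel(), 1)[0]          # slope of T vs r^2 -> 2*A*d
power_diopters = 2.0 * (n - 1.0) * coeff * 1e3   # paraxial power 4*A*(n-1)*d, in D
```

With these numbers the pair yields roughly 0.39 D; sliding d sweeps the power linearly, and covering the 2.5–7.5 m VID range needs only about a 0.27 D span (1/2.5 − 1/7.5 m⁻¹).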
Perceptually natural standard-dynamic-range (SDR) images reproduced under normal viewing conditions should retain enough information for a human observer to estimate the time at which the actual high-dynamic-range (HDR) scene was captured without recourse to artificial information. Currently, global and local tone-mapping operators (TMOs) seem to have comparable levels of performance. Therefore, we first consider the constraints created in the actual human visual system by eye movement and buttress a hypothesis with a demonstration. We briefly review the imperceptible illuminance effects produced by the personal circadian clock suggested by chronophysiological research, along with other related effects, because our previous study suggested that the characteristics of the human visual system vary dynamically with the individual's circadian pattern. Finally, we conduct two psychophysical experiments based on the hypothesis that the human visual system employs several global TMOs at the first stage of information compression, depending on individual circadian visual features (ICVF). The results suggest that (1) no participant can perceive the actual capture time (ACT) and (2) sensitive observers can discriminate reproduced images based on virtual shooting time (VST) effects induced by different types of global TMOs. We also discover that VST-based discrimination differs widely among people, but most are unaware of this effect, as evidenced by daily conversations.
"Next generation personalized display systems employing adaptive dynamic-range compression techniques to address diversity in individual circadian visual features," by Sakuichi Ohtsuka, Saki Iwaida, Yuichiro Orita, Shoko Hira, Masayuki Kashima. Journal of the Society for Information Display 32(6): 462–483. Published 2024-04-17. DOI: 10.1002/jsid.1277. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jsid.1277
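For readers unfamiliar with the global/local TMO distinction the experiments rely on: a global TMO applies one fixed compression curve to every pixel. A minimal sketch using the well-known Reinhard global operator — an illustration of the category only, not one of the operators tested in the study:

```python
import numpy as np

def reinhard_global(hdr, key=0.18, eps=1e-6):
    # One fixed curve for every pixel: scale by the log-average luminance,
    # then compress with L / (1 + L). Identical inputs map to identical
    # outputs regardless of their neighborhood -- the defining property
    # of a global TMO, unlike a local operator.
    log_avg = np.exp(np.mean(np.log(hdr + eps)))
    scaled = key * hdr / log_avg
    return scaled / (1.0 + scaled)

hdr = np.array([0.01, 0.1, 1.0, 10.0, 100.0])  # toy HDR luminances (5 decades)
sdr = reinhard_global(hdr)                     # monotone, compressed into (0, 1)
```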
Wang Xinyu, Lu Zhiyong, Kuang Guofeng, Tang Guofu, Liu Chao, Zhang Qinquan, Lian Qiaozhen, Huang Xuerun
We develop a measurement and evaluation system to quantify the halo effect of mini-light-emitting diode (LED) backlight liquid crystal displays (mLCDs). The validity and reliability of our halo measurement system were investigated through a human visual perception experiment. The results indicate that our system can effectively distinguish the halo differences among different displays, with a matching rate of 93.3% between our measurement and the human visual system.
"Halo effect measurement for mini-light-emitting diode backlight liquid crystal displays." Journal of the Society for Information Display 32(4): 136–148. Published 2024-04-11. DOI: 10.1002/jsid.1278.
Ya-Hsiang Tai, Yi-Cheng Yuan, Chih-Yang Chen, Te-Yu Lee
In this paper, we present a novel approach to counter the influence of ambient light on photodetectors used in applications like biometric recognition and environmental sensing. The proposed solution introduces a circuit-based technique that utilizes signal differencing to subtract ambient light signals before they reach the integrated circuit (IC). The process involves row and column differential signals, akin to analog circuit differential amplifiers. Simulations validate the circuit's functionality, showing its effectiveness in reducing ambient light impact. However, image reconstruction after differencing introduces blurriness due to the accumulation of noise. An alternative bidirectional fusion method is suggested, resulting in a clearer representation of features without noise accumulation. This innovative approach promises to enhance photodetector performance in challenging lighting conditions for various applications.
"In-panel ambient light eliminating differential circuit applied to active pixel fingerprint sensor." Journal of the Society for Information Display 32(4): 149–158. Published 2024-04-05. DOI: 10.1002/jsid.1279.
Today's display industry faces transistor-level challenges similar to those of complementary metal-oxide-semiconductor (CMOS) metal-oxide-semiconductor field-effect transistors (MOSFETs) in the mid-1990s. Learnings from MOSFETs inform the display industry's response to the limitations of silicon-based thin-film transistors (TFTs). Improvements sustaining Moore's Law drove the need to rethink MOSFET materials and structures; the display industry likewise needs fundamental innovation at the device level. New thin-film devices enable an inflection point in the use of displays, just as the fin field-effect transistor (FinFET) defined the inflection point in CMOS in the 2000s. This paper outlines two innovations in thin-film device technology that offer improvements in the image quality and power consumption of flat-panel displays: amorphous metal gate TFTs (AMeTFTs) and amorphous metal nonlinear resistors (AMNRs). Linked through a single core material set based on mass-producible, thin-film amorphous metals, these two innovations create near- and long-term roadmaps simplifying the production of high-image-quality, low-power displays on glass (now) and plastic (future). In particular, the field-effect mobility of indium gallium zinc oxide (IGZO) AMeTFTs (55–72 cm²/V·s) exceeds that of IGZO TFTs developed by existing display manufacturers, without the need for atomic layer deposition or vertical stacking of heterostructure semiconductor films, making AMeTFTs a natural choice for the new G8.5–G8.7 fabs targeting IGZO backplanes.
"Innovations in thin-film electronics for the new generation of displays," by Andre Zeumault, Jose E. Mendez, John Brewer. Journal of the Society for Information Display 32(4): 121–135. Published 2024-03-25. DOI: 10.1002/jsid.1274. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/jsid.1274
This paper proposes a novel aerial display system that reconstructs face orientation. The proposed system forms two face images floating in mid-air. Viewers observe a spatially blended image of the two, where the blending ratio depends on the viewing position. Thus, the blended aerial face image is perceived to look in a fixed orientation even as the viewing position changes within a certain viewing range. We analyze the optical design of the spatial blending system and show results from our prototype display.
"Aerial display that reconstructs face orientation by use of spatial blending of two face images," by Kohei Kishinami, Keigo Sato, Masaki Yasugi, Shiro Suyama, Hirotsugu Yamamoto. Journal of the Society for Information Display 32(3): 101–111. Published 2024-03-14. DOI: 10.1002/jsid.1273.
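The position-dependent blend can be sketched as a simple linear mixing of the two source images. The actual ratio in the paper follows from its optical design; the weighting below is purely illustrative.

```python
import numpy as np

def blended_image(img_left, img_right, view_x, x_min, x_max):
    # Blending ratio alpha varies linearly with the horizontal viewing
    # position across the viewing zone [x_min, x_max], so the two face
    # images (rendered at two different orientations) mix in proportion
    # to where the viewer stands.
    alpha = np.clip((view_x - x_min) / (x_max - x_min), 0.0, 1.0)
    return (1.0 - alpha) * img_left + alpha * img_right

face_a = np.full((4, 4), 0.2)   # stand-ins for the two aerial face images
face_b = np.full((4, 4), 0.8)

center = blended_image(face_a, face_b, 0.0, -0.5, 0.5)  # mid viewing zone
```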
Ren-Jun Choong, Wun-She Yap, Yan Chai Hum, Khin Wee Lai, Lloyd Ling, Anthony Vodacek, Yee Kai Tee
The visual microphone is a technique for remote sound recovery that extracts sound information from tiny pixel-scale vibrations in a video. Despite its demonstrated success, the impact of the visual enhancement and color conversion algorithms applied to a video before the sound recovery process has not been explored. It is important to investigate the effects these algorithms have on the recovered sound quality because the vibrations are so small that such preprocessing plays a decisive role. This work experimented with different color-to-grayscale conversions and visual enhancement algorithms on 576 videos and found that the recovered sound quality is indeed greatly affected by the choice of algorithm. The best conversion algorithms were found to be the average of the red, green, and blue color channels and the perceptual lightness in the CIELAB color space, improving the recovered sound quality by up to 23.22%. Furthermore, visual enhancement techniques such as gamma correction were found to corrupt vibration information, leading to a 22.47% drop in recovered sound quality in one of the tested videos.
"Impact of visual enhancement and color conversion algorithms on remote sound recovery from silent videos." Journal of the Society for Information Display 32(3): 112–125. Published 2024-03-12. DOI: 10.1002/jsid.1275.
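The two best-performing conversions named above are easy to state precisely. The sketch below computes both the plain RGB channel average and CIELAB perceptual lightness L* (via the standard sRGB linearization and Rec. 709 luminance weights); the study's videos and metrics are not reproduced here.

```python
import numpy as np

def srgb_to_linear(c):
    # Undo the sRGB transfer curve (IEC 61966-2-1)
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def gray_average(rgb):
    # Plain channel average -- one of the two best performers in the study
    return np.mean(rgb, axis=-1)

def gray_cielab_L(rgb):
    # CIELAB perceptual lightness L*, rescaled to [0, 1]
    y = srgb_to_linear(rgb) @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 Y
    delta = 6.0 / 29.0
    f = np.where(y > delta**3, np.cbrt(y), y / (3 * delta**2) + 4.0 / 29.0)
    return (116.0 * f - 16.0) / 100.0

px = np.array([0.25, 0.5, 0.75])   # one sRGB pixel, channels in [0, 1]
```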
This paper proposes a dynamic information fusion interactive system with a transparent display applied in the vehicle. The system integrates three leading technologies: relative position acquisition, fusion information mapping, and visual-field-tracking adaptive information generation. For the last of these, the position of the fused information changes as the observer's field of view varies, helping observers use the system more comfortably and intuitively. The frame rate of the dynamic virtual information display can be increased from 10 to 20 Hz without increasing the AI computing power, providing observers with a better viewing experience. Integrating these technologies achieves an AR-based interactive system on a direct-view transparent display. Such a system can provide passengers with location-based interactive information that fuses virtual and real content in mobility applications.
"Fusion information refresh rate improvement based on adaptive visual tracking in vehicle augmented reality sightseeing interactive system," by Yu-Hsiang Tsai, Hong-Ming Dai, Yung-Jhe Yan, Mang Ou-Yang. Journal of the Society for Information Display 32(8): 545–554. Published 2024-03-10. DOI: 10.1002/jsid.1276.
Takumi Hori, Kayo Yoshimoto, Goro Hamagishi, Hideya Takahashi
We have previously proposed a parallax barrier autostereoscopic display with eye-tracking control that realized a wide viewing area. However, the previous method simplified the process by using the average interocular distance and composing images corresponding to the center position between the eyes. As a result, crosstalk occurred, caused by individual differences in interocular distance and by changes in viewing condition such as facial tilt and rotation. The crosstalk caused by individual differences in interocular distance is an important factor that must be eliminated. Therefore, we propose a method that composes the parallax image from the individual positions of both eyes to expand the viewing zone. We assign black to the crosstalk subpixels, which can be determined from the position of each eye, achieving more comfortable stereoscopic viewing. To verify the effectiveness of the proposed method, we constructed a prototype 3D display using a parallax barrier with a 50% aperture ratio and confirmed a wide viewing area with a crosstalk ratio of less than 5%. The result is an autostereoscopic display that produces bright, high-quality stereoscopic images independent of individual differences in interocular distance and changes in viewing condition.
"Two-view autostereoscopic display independent of differences of interocular distance and viewing condition." Journal of the Society for Information Display 32(3): 89–100. Published 2024-03-03. DOI: 10.1002/jsid.1272.
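The blacking of crosstalk subpixels can be illustrated with a highly simplified model (not the paper's barrier geometry): given where each subpixel's light lands on the viewer plane and the measured positions of both eyes, subpixels whose light lands near the midpoint between the eyes, which would leak into both, are set to black.

```python
import numpy as np

def assign_subpixels(proj_x, left_eye_x, right_eye_x, black_margin):
    # Assign each subpixel to the left view (0), the right view (1), or
    # black (2). proj_x is where each subpixel's light lands on the viewer
    # plane -- a stand-in for the real barrier geometry. Subpixels landing
    # within black_margin of the midpoint between the two measured eye
    # positions would be seen by both eyes, so they are blacked out.
    mid = 0.5 * (left_eye_x + right_eye_x)
    view = np.where(proj_x < mid, 0, 1)
    return np.where(np.abs(proj_x - mid) < black_margin, 2, view)

proj = np.linspace(-40.0, 40.0, 9)                 # mm, toy projected positions
views = assign_subpixels(proj, -31.0, 31.0, 5.0)   # individual eye positions, mm
```

Because the assignment uses the two measured eye positions rather than an average interocular distance, it adapts per viewer, which is the point the abstract makes.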
Menglan Xie, Huiqing Pang, Jing Wang, Zhihao Cui, Hualong Ding, Renjie Zheng, Ray Kwong, Sean Xia
Charge balance is one of the most important factors for realizing high-performance organic light-emitting devices (OLEDs). In this work, we provide a novel strategy to improve the charge balance in OLEDs by optimizing the hole injection layer (HIL) as well as the electron transport layer (ETL), thereby controlling the charge carrier supplies in the device. First, we develop a p-dopant material (PD02) with a lowest unoccupied molecular orbital (LUMO) of −4.63 eV, much shallower than that of the commercial material (PD01), whose LUMO is −5.04 eV. This enables us to modulate the supply of holes to the emissive layer by tuning the doping concentration, and we demonstrate that device performance is significantly improved by employing such a scheme. With 23% molar doping of PD02, a bottom-emission red OLED achieves an external quantum efficiency (EQE) of over 30%, an operating voltage of 3.4 V, and an LT95 of ~15,000 h at 10 mA/cm², with a Digital Cinema Initiative P3 (DCI-P3) chromaticity of CIE (x, y) = (0.68, 0.32). Moreover, the efficiency roll-off is suppressed up to ~3500 cd/m², a desirable feature in display applications. The lateral conductivity of such an HIL is also found to be much lower than that of PD01, resulting in reduced crosstalk among RGB pixels. Next, a new electron transport material (ETM-02) with a deep LUMO of −2.86 eV is introduced to further optimize the charge balance. Although devices with ETM-02 show lower voltage and higher EQE, lifetime is compromised, so additional fine-tuning of the charge balance is essential. Finally, a second p-dopant, PD03, with a LUMO of −4.91 eV is added to the HIL to further extend the modulation flexibility in hole injection. A double-layer HIL consisting of 8 nm of HTM:16% PD02 and 2 nm of HTM:3% PD03, where the former is in contact with the anode, is adopted in the device structure.
The bottom-emission deep-red device achieves an EQE of over 30%, an operating voltage of 3.2 V, and an improved LT95 of ~13,000 h at 10 mA/cm², with a BT.2020-range chromaticity of CIE (x, y) = (0.701, 0.299). In the double-HIL configuration, the introduction of PD03 provides one more tuning parameter and therefore improves the overall device performance.
"Charge balance in OLEDs: Optimization of hole injection layer using novel p-dopants." Journal of the Society for Information Display 32(2): 71–81. Published 2024-02-23. DOI: 10.1002/jsid.1271.