Donghyun Lee, Jaehak Lee, Dongkyun Seo, Yangho Jung, Hyunsup Lee, Donghwan Kong, Sijoon Song
We developed a novel method to minimize the bezel of flexible displays through backside bonding of a chip on film, achieving a bezel width of less than 500 μm, compared with 1000 μm in conventional displays. The metal embedded in polyimide (MEP) layer is placed between the first and second polyimide (PI) substrates and connected to the metal lines of the backplane via the MEP contact (M-CNT) hole. Subsequently, nonconductive film (NCF) bonding and intense pulsed light sintering are performed using conductive ink. Conductive ink, an interconnect material capable of low-temperature sintering, is applied to avert thermal degradation and cracking. Under high temperature (65°C) and humidity (90% relative humidity), the contact resistance remained at a drivable level for the display after 240 h. The normalized strains in the M-CNT hole and MEP areas were less than 0.4, indicating the absence of cracks during NCF bonding. These results demonstrate that the backside bonding method is suitable for the extremely narrow bezels of next-generation flexible displays.
"Backside bonding for extremely narrow bezel at the bottom of flexible displays," Journal of the Society for Information Display, published 2024-04-28, DOI: 10.1002/jsid.1284.
Vergence-accommodation conflicts (VAC) occur in near-eye displays when the binocular disparity of the 3D-rendered content (vergence) does not match the display focal distance (accommodation). VAC has been shown to reduce perceptual image quality, cognitive performance, and oculomotor coordination. In this study, we investigated the impact of VAC on visual performance in augmented reality (AR). Specifically, we quantified the impact of AR VAC on the 'Time to Focus' (TTF): the time taken when the user switches focus between real-world content and world-locked AR-rendered content. Our results show that TTF increases exponentially with VAC. The increase is more pronounced at closer vergence distances in displays with a focal distance of 1 D or longer. Finally, we showed that VAC may have a differential effect across age groups; specifically, older users may be affected more at closer focal and longer vergence distances.
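VAC is conventionally expressed as the absolute difference between the vergence and accommodation demands in diopters (reciprocal meters). A minimal sketch of that bookkeeping (the function names are ours, not the authors'):

```python
def to_diopters(distance_m: float) -> float:
    """Convert a viewing distance in meters to diopters."""
    return 1.0 / distance_m

def vac(vergence_m: float, focal_m: float) -> float:
    """Vergence-accommodation conflict: |vergence - accommodation| in diopters."""
    return abs(to_diopters(vergence_m) - to_diopters(focal_m))

# Content rendered at 0.5 m (2 D) on a display with a 1 m (1 D) focal distance:
conflict = vac(0.5, 1.0)  # 2 D - 1 D = 1 D of conflict
```

The same arithmetic shows why closer vergence distances are harsher: halving the rendering distance again (0.25 m) triples the conflict on the same display.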
Daniel P. Spiegel and Ian M. Erkelens, "Vergence-accommodation conflict increases time to focus in augmented reality," Journal of the Society for Information Display, published 2024-04-25, DOI: 10.1002/jsid.1283.
In the quest for more compact and efficient augmented reality (AR) displays, the standard approach often necessitates the use of multiple layers to facilitate a large full-color field of view (FoV). Here, we delve into the constraints of FoV in single-layer, full-color waveguide-based AR displays, uncovering the critical roles played by the waveguide's refractive index, the exit pupil expansion (EPE) scheme, and the combiner's angular response in dictating these limitations. Through detailed analysis, we introduce an innovative approach, featuring an optimized butterfly EPE scheme coupled with gradient-pitch polarization volume gratings (PVGs). This novel configuration successfully achieves a theoretical diagonal FoV of 54.06° while maintaining a 16:10 aspect ratio.
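The refractive-index limitation the authors analyze stems from total internal reflection: only rays steeper than the critical angle are guided, so a higher-index waveguide widens the in-glass angular range available to carry the field of view. A rough sketch (the 75° upper bound is our illustrative assumption, not a figure from the paper):

```python
import math

def critical_angle_deg(n: float) -> float:
    """Critical angle for total internal reflection at a glass-air interface."""
    return math.degrees(math.asin(1.0 / n))

def guided_angular_range_deg(n: float, theta_max_deg: float = 75.0) -> float:
    """In-glass angular range available to guided rays: from the critical angle
    up to a practical maximum bounce angle (75 deg is our illustrative cap;
    near-grazing rays interact with the out-coupler too few times)."""
    return theta_max_deg - critical_angle_deg(n)

range_n15 = guided_angular_range_deg(1.5)  # ordinary glass
range_n19 = guided_angular_range_deg(1.9)  # high-index glass carries a wider FoV
```

The EPE scheme and the combiner's angular response then determine how much of this in-glass range actually reaches the eye, which is what the butterfly EPE and gradient-pitch PVGs optimize.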
Qian Yang, Yuqiang Ding, and Shin-Tson Wu, "Full-color, wide field-of-view single-layer waveguide for augmented reality displays," Journal of the Society for Information Display, published 2024-04-24, DOI: 10.1002/jsid.1288.
In this work, a highly efficient photosensitive quantum dot (QD) system was designed, exhibiting high photoluminescence quantum yield and colloidal stability. By direct photolithography, RGB pixel arrays with a single sub-pixel size of 39 μm × 5 μm were successfully prepared, and a full-color QLED device was realized. No residual QD emission peaks from neighboring sub-pixels were observed in the electroluminescence spectra. Experience with the full-color QLED device guided the successful preparation of a 4.7-inch 650 PPI active-matrix QLED prototype, which displays clear and complete pictures with a color gamut reaching 85% of the BT.2020 standard. This is the first active-matrix QLED prototype prepared at such a record-high resolution by direct photolithography, advancing the development of QLED display technology.
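As a sanity check on the stated numbers, a 650 PPI grid implies a pixel pitch of 25.4 mm / 650 ≈ 39.1 μm, which matches the 39 μm sub-pixel length quoted above:

```python
MM_PER_INCH = 25.4

def pixel_pitch_um(ppi: float) -> float:
    """Pixel pitch in micrometers implied by a pixels-per-inch density."""
    return MM_PER_INCH * 1000.0 / ppi

pitch = pixel_pitch_um(650)  # ~39.1 um; three 5-um-wide RGB stripes fit inside
```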
Di Zhang, Zhuo Li, Shaoyong Lu, Dong Li, Zhuo Chen, Yanzhao Li, Xinguo Li, and Xiaoguang Xu, "A 4.7-inch 650 PPI AMQLED display prepared by direct photolithography," Journal of the Society for Information Display, published 2024-04-24, DOI: 10.1002/jsid.1281.
Yi Liu, Yuqing Qiu, Jiaqi Dong, Bo-Ru Yang, Zong Qin
A crucial requirement of augmented reality head-up displays (AR-HUDs) is a continuously adjustable virtual image distance (VID), which allows adaptation to various depths in road environments and thereby avoids visual fatigue. However, the usual varifocal components for near-eye displays are unsuitable because AR-HUDs require the varifocal component's aperture to be larger than 10 cm. This study considers Alvarez lenses, which change optical power by sliding two freeform lenses in-plane. Under the paraxial assumption, classic Alvarez lenses create a quadratic wavefront profile, but the large aperture and wide diopter variation range required by AR-HUDs lead to significant aberrations. The classic paraxial Alvarez lens design is therefore extended by co-optimizing Alvarez lenses with high-order surface profiles and a primary freeform mirror, yielding a novel varifocal AR-HUD containing Alvarez lenses with apertures larger than 15 cm. The AR-HUD generates a varifocal plane whose VID can be continuously adjusted between 2.5 and 7.5 m, and another focal plane with a fixed VID at 7.5 m. In addition, only one display panel is used, for compactness. Finally, an AR-HUD prototype with a reduced volume of 9.8 L was built. The expected varifocal performance and imaging quality were experimentally verified through measurements of the field of view, VID, and image sharpness.
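For context, the classic paraxial result the abstract refers to is that two complementary cubic plates shifted laterally by ±δ act as a lens whose power grows linearly with the shift, P = 4A(n − 1)δ. A sketch with illustrative values (A and n below are our assumptions, not the paper's design parameters):

```python
def alvarez_power(A: float, n: float, delta: float) -> float:
    """Paraxial power in diopters of a classic Alvarez pair with cubic surface
    coefficient A (m^-2), refractive index n, and lateral shift delta (m):
    P = 4 * A * (n - 1) * delta."""
    return 4.0 * A * (n - 1.0) * delta

# Power swing needed to move the VID between 7.5 m and 2.5 m:
diopter_span = 1.0 / 2.5 - 1.0 / 7.5  # ~0.27 D

# Lateral shift delivering that swing for illustrative A = 10 m^-2, n = 1.5:
shift_needed = diopter_span / (4.0 * 10.0 * (1.5 - 1.0))  # ~13 mm of slide
```

At a >15 cm aperture this quadratic-only profile leaves large aberrations, which is why the paper adds high-order surface terms and a co-optimized freeform mirror.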
"A varifocal augmented reality head-up display using Alvarez freeform lenses," Journal of the Society for Information Display, published 2024-04-24, DOI: 10.1002/jsid.1286.
Perceptually natural standard-dynamic-range (SDR) images reproduced under normal viewing conditions should retain enough information for a human observer to estimate the time at which the actual high-dynamic-range (HDR) scene was captured, without recourse to artificial information. Currently, global and local tone-mapping operators (TMOs) seem to have comparable levels of performance. We therefore first consider the constraints created in the human visual system by eye movement, and buttress a hypothesis with a demonstration. We briefly review the imperceptible illuminance effects of the personal circadian clock suggested by chronophysiological research, along with other related effects, because our previous study suggested that the characteristics of the human visual system vary dynamically with the individual's circadian pattern. Finally, we conduct two psychophysical experiments based on the hypothesis that the human visual system employs several global TMOs at the first stage of information compression that depend on individual circadian visual features (ICVF). The results suggest that (1) no participant can perceive the actual capture time (ACT) and (2) sensitive observers can discriminate reproduced images based on virtual shooting time (VST) effects induced by different types of global TMOs. We also find that VST-based discrimination differs widely among people, but most are unaware of this effect, as evidenced by daily conversations.
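For readers unfamiliar with the global/local distinction, a global TMO applies one fixed compression curve to every pixel of the HDR image. A minimal sketch using the well-known Reinhard global operator (an illustrative example, not one of the operators from this study):

```python
import math

def reinhard_global(luminance, key=0.18):
    """Reinhard global tone mapping: the same curve is applied to every pixel,
    which is what makes the operator 'global'. `luminance` holds scene-referred
    values; `key` scales the scene's log-average brightness."""
    eps = 1e-6  # guard against log(0)
    log_avg = math.exp(sum(math.log(eps + l) for l in luminance) / len(luminance))
    scaled = [key * l / log_avg for l in luminance]
    return [s / (1.0 + s) for s in scaled]  # compress to [0, 1)

sdr = reinhard_global([0.01, 0.1, 1.0, 10.0, 100.0])  # five decades of HDR input
```

A local TMO would instead adapt the curve per neighborhood; the paper's hypothesis concerns the global first-stage compression.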
Sakuichi Ohtsuka, Saki Iwaida, Yuichiro Orita, Shoko Hira, and Masayuki Kashima, "Next generation personalized display systems employing adaptive dynamic-range compression techniques to address diversity in individual circadian visual features," Journal of the Society for Information Display, published 2024-04-17, DOI: 10.1002/jsid.1277.
Wang Xinyu, Lu Zhiyong, Kuang Guofeng, Tang Guofu, Liu Chao, Zhang Qinquan, Lian Qiaozhen, Huang Xuerun
We develop a measurement and evaluation system to quantify the halo effect of mini-light-emitting diode (LED) backlight liquid crystal displays (mLCDs). The validity and reliability of our halo measurement system were investigated through a human visual perception experiment. The results indicate that our system can effectively distinguish halo differences among displays, with a matching rate of 93.3% between our measurement and the human visual system.
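One simple way to score such a matching rate is pairwise rank agreement between instrument readings and human judgments; the sketch below uses hypothetical scores and is our guess at the metric, not the paper's exact protocol. Note that 14 of 15 agreeing pairs gives exactly 93.3%:

```python
from itertools import combinations

def matching_rate(measured, perceived):
    """Fraction of display pairs where the instrument and the human observers
    rank halo severity in the same order (pairwise-agreement metric)."""
    pairs = list(combinations(range(len(measured)), 2))
    agree = sum(
        (measured[i] - measured[j]) * (perceived[i] - perceived[j]) > 0
        for i, j in pairs
    )
    return agree / len(pairs)

# Illustrative halo scores for six displays (hypothetical data); the humans
# swap two mid-ranked displays, so 14 of the 15 pairs agree:
rate = matching_rate([1.0, 2.1, 3.0, 3.9, 5.2, 6.1],
                     [1, 2, 3, 5, 4, 6])  # 14/15 = 93.3%
```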
"Halo effect measurement for mini-light-emitting diode backlight liquid crystal displays," Journal of the Society for Information Display, published 2024-04-11, DOI: 10.1002/jsid.1278.
Ya-Hsiang Tai, Yi-Cheng Yuan, Chih-Yang Chen, Te-Yu Lee
In this paper, we present a novel approach to counter the influence of ambient light on photodetectors used in applications like biometric recognition and environmental sensing. The proposed solution introduces a circuit-based technique that utilizes signal differencing to subtract ambient light signals before they reach the integrated circuit (IC). The process involves row and column differential signals, akin to analog circuit differential amplifiers. Simulations validate the circuit's functionality, showing its effectiveness in reducing ambient light impact. However, image reconstruction after differencing introduces blurriness due to the accumulation of noise. An alternative bidirectional fusion method is suggested, resulting in a clearer representation of features without noise accumulation. This innovative approach promises to enhance photodetector performance in challenging lighting conditions for various applications.
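The row-differencing idea can be sketched numerically: subtracting adjacent rows cancels a spatially uniform ambient pedestal exactly, while re-integrating the differences accumulates readout noise, which motivates the bidirectional fusion. All data below are synthetic (a minimal sketch, not the authors' circuit simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
pattern = rng.random((n, n))                  # fingerprint signal (hypothetical)
ambient = 5.0                                 # uniform ambient-light pedestal
noise = rng.normal(0.0, 0.01, (n, n))         # per-pixel readout noise
readout = pattern + ambient + noise           # what the pixels actually report

# Row-wise differencing: the common ambient term cancels before the IC.
diff = readout[1:] - readout[:-1]             # ambient-free by construction

# Integrating the differences forward recovers the pattern up to a per-column
# offset, but each row adds one more difference's noise: the accumulation the
# paper attributes the reconstruction blurriness to.
recon_fwd = np.cumsum(np.vstack([np.zeros(n), diff]), axis=0)

# Bidirectional fusion (our sketch of the idea): also integrate from the
# bottom edge and average, so accumulated noise is not biased to one side.
recon_bwd = -np.cumsum(np.vstack([np.zeros(n), diff[::-1]]), axis=0)[::-1]
recon = 0.5 * (recon_fwd + recon_bwd)
```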
"In-panel ambient light eliminating differential circuit applied to active pixel fingerprint sensor," Journal of the Society for Information Display, published 2024-04-05, DOI: 10.1002/jsid.1279.
Today's display industry faces transistor-level challenges similar to those of complementary metal-oxide-semiconductor (CMOS) metal-oxide-semiconductor field-effect transistors (MOSFETs) in the mid-1990s. Learnings from MOSFETs inform the display industry's response to the limitations of silicon-based thin-film transistors (TFTs). The improvements sustaining Moore's Law drove the need to rethink MOSFET materials and structures, and the display industry likewise needs fundamental innovation at the device level. New thin-film devices enable an inflection point in the use of displays, just as the fin field-effect transistor (FinFET) defined the inflection point in CMOS in the 2000s. This paper outlines two innovations in thin-film device technology that offer improvements in the image quality and power consumption of flat-panel displays: amorphous metal gate TFTs (AMeTFTs) and amorphous metal nonlinear resistors (AMNRs). Linked through a single core material set based on mass-producible, thin-film amorphous metals, these two innovations create near- and long-term roadmaps simplifying the production of high-image-quality, low-power-consumption displays on glass (now) and plastic (future). In particular, the field-effect mobility of indium gallium zinc oxide (IGZO) AMeTFTs (55–72 cm²/Vs) exceeds that of IGZO TFTs developed by existing display manufacturers, without the need for atomic layer deposition or vertical stacking of heterostructure semiconductor films, making AMeTFTs a natural choice for the new G8.5–G8.7 fabs targeting IGZO backplanes.
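The practical significance of the mobility figures can be illustrated with the textbook square-law TFT model, where saturation drive current scales linearly with field-effect mobility (the bias, geometry, oxide capacitance, and the 10 cm²/Vs IGZO baseline below are illustrative assumptions, not values from the paper):

```python
def tft_sat_current_ua(mobility_cm2_vs: float, cox_f_cm2: float,
                       w_over_l: float, vov: float) -> float:
    """Long-channel square-law saturation drain current in microamps:
    I_D = 0.5 * mu * Cox * (W/L) * (Vgs - Vt)^2."""
    return 0.5 * mobility_cm2_vs * cox_f_cm2 * w_over_l * vov**2 * 1e6

cox = 3.45e-8  # F/cm^2, roughly 100 nm of SiO2 (illustrative)
i_igzo = tft_sat_current_ua(10.0, cox, 10.0, 5.0)    # typical IGZO TFT baseline
i_ametft = tft_sat_current_ua(60.0, cox, 10.0, 5.0)  # mid-range AMeTFT mobility
```

At identical geometry and bias, the sixfold mobility advantage translates directly into sixfold drive current, or equivalently a much smaller pixel transistor for the same current.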
Andre Zeumault, Jose E. Mendez, and John Brewer, "Innovations in thin-film electronics for the new generation of displays," Journal of the Society for Information Display, published 2024-03-25, DOI: 10.1002/jsid.1274.
This paper proposes a novel aerial display system that reconstructs face orientation. The proposed system forms two face images floating in mid-air. Viewers observe a spatially blended image of the two face images, where the spatial blending ratio depends on the viewing position. Thus, the spatially blended aerial face image is perceived to look in a fixed orientation even if the viewing position is changed within a certain viewing range. We analyze the optical design of the spatial-blending system and present results from our prototype display.
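The position-dependent blend the abstract describes can be sketched as a weighted sum of the two aerial images, with the weight set by the viewer's lateral position (the linear weight profile below is our simplification; the paper derives the actual ratio from the optics):

```python
def blend_weight(x_view: float, x_left: float, x_right: float) -> float:
    """Blending weight for face image A as a function of the viewer's lateral
    position: 1.0 at the left edge of the viewing range, 0.0 at the right
    edge, clamped outside (linear profile is our assumption)."""
    t = (x_view - x_left) / (x_right - x_left)
    t = min(1.0, max(0.0, t))
    return 1.0 - t

def blended_pixel(face_a: float, face_b: float, w: float) -> float:
    """Spatially blended intensity of the two aerial face images at one pixel."""
    return w * face_a + (1.0 - w) * face_b

# At the center of a viewing range spanning -0.5 m to +0.5 m, both faces
# contribute equally (w = 0.5):
center = blended_pixel(0.2, 0.8, blend_weight(0.0, -0.5, 0.5))
```

As the viewer moves, the mix shifts continuously from one pre-rendered orientation to the other, which is what keeps the perceived gaze direction fixed.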
Kohei Kishinami, Keigo Sato, Masaki Yasugi, Shiro Suyama, and Hirotsugu Yamamoto, "Aerial display that reconstructs face orientation by use of spatial blending of two face images," Journal of the Society for Information Display, published 2024-03-14, DOI: 10.1002/jsid.1273.