In the quest for more compact and efficient augmented reality (AR) displays, the standard approach often necessitates the use of multiple layers to facilitate a large full-color field of view (FoV). Here, we delve into the constraints of FoV in single-layer, full-color waveguide-based AR displays, uncovering the critical roles played by the waveguide's refractive index, the exit pupil expansion (EPE) scheme, and the combiner's angular response in dictating these limitations. Through detailed analysis, we introduce an innovative approach, featuring an optimized butterfly EPE scheme coupled with gradient-pitch polarization volume gratings (PVGs). This novel configuration successfully achieves a theoretical diagonal FoV of 54.06° while maintaining a 16:10 aspect ratio.
{"title":"Full-color, wide field-of-view single-layer waveguide for augmented reality displays","authors":"Qian Yang, Yuqiang Ding, Shin-Tson Wu","doi":"10.1002/jsid.1288","DOIUrl":"10.1002/jsid.1288","url":null,"abstract":"<p>In the quest for more compact and efficient augmented reality (AR) displays, the standard approach often necessitates the use of multiple layers to facilitate a large full-color field of view (FoV). Here, we delve into the constraints of FoV in single-layer, full-color waveguide-based AR displays, uncovering the critical roles played by the waveguide's refractive index, the exit pupil expansion (EPE) scheme, and the combiner's angular response in dictating these limitations. Through detailed analysis, we introduce an innovative approach, featuring an optimized butterfly EPE scheme coupled with gradient-pitch polarization volume gratings (PVGs). This novel configuration successfully achieves a theoretical diagonal FoV of 54.06° while maintaining a 16:10 aspect ratio.</p>","PeriodicalId":49979,"journal":{"name":"Journal of the Society for Information Display","volume":"32 5","pages":"247-254"},"PeriodicalIF":2.3,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140664142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this work, a highly efficient photosensitive quantum dot (QD) system was designed. The optimized photosensitive QD system exhibited high photoluminescence quantum yield and colloidal stability. By direct photolithography, RGB pixel arrays with a single sub-pixel size of 39 μm × 5 μm were successfully prepared, and a full-color QLED device was then realized. No residual QD emission peaks from neighboring sub-pixels were observed in the electroluminescence spectra. Experience with the full-color QLED device guided the successful preparation of a 4.7-inch 650 PPI active-matrix QLED prototype, which could display clear and complete pictures. The color gamut reached 85% of the BT.2020 standard. This is the first active-matrix QLED prototype prepared by direct photolithography at such a record-high resolution, promoting the development of QLED display technology.
{"title":"A 4.7-inch 650 PPI AMQLED display prepared by direct photolithography","authors":"Di Zhang, Zhuo Li, Shaoyong Lu, Dong Li, Zhuo Chen, Yanzhao Li, Xinguo Li, Xiaoguang Xu","doi":"10.1002/jsid.1281","DOIUrl":"10.1002/jsid.1281","url":null,"abstract":"<p>In this work, a highly efficient photosensitive quantum dot (QD) system was designed. The optimized photosensitive QD system had high photoluminescence quantum yield and colloidal stability. By direct photolithography, RGB pixel arrays with a single sub-pixel size of 39 μm × 5 μm were successfully prepared. Further, the full-color QLED device was realized. There were no residual QD emission peaks from neighboring sub-pixels observed in the electroluminescence spectra. Experience on the full-color QLED device guided the successful preparation of a 4.7-inch 650 PPI active matrix QLED prototype. The active matrix QLED prototype could display clear and complete pictures. The color gamut reached 85% of the BT2020 standard. This is the first active matrix QLED prototype prepared with a record-high resolution by direct photolithography, which promoted the development of QLED display technology.</p>","PeriodicalId":49979,"journal":{"name":"Journal of the Society for Information Display","volume":"32 5","pages":"174-183"},"PeriodicalIF":2.3,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140662220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A crucial requirement of augmented reality head-up displays (AR-HUDs) is a continuously adjustable virtual image distance (VID), which allows adaptation to various depths in road environments and thereby avoids visual fatigue. However, the usual varifocal components for near-eye displays are unavailable because AR-HUDs require the varifocal component's aperture to exceed 10 cm. This study considers Alvarez lenses, which change optical power by sliding two freeform lenses in plane. Under the paraxial assumption, classic Alvarez lenses create a quadratic wavefront profile, but the large aperture and wide diopter variation range required by AR-HUDs lead to significant aberrations. The classic paraxial Alvarez design is therefore extended by co-optimizing Alvarez lenses with high-order surface profiles and a primary freeform mirror, yielding a novel varifocal AR-HUD containing Alvarez lenses with apertures larger than 15 cm. The AR-HUD generates a varifocal plane whose VID can be continuously adjusted between 2.5 and 7.5 m, plus a second focal plane with a fixed VID of 7.5 m. For compactness, only one display panel is used. Finally, an AR-HUD prototype with a reduced volume of 9.8 L was built, and the expected varifocal performance and imaging quality were experimentally verified through measurements of the field of view, VID, and image sharpness.
{"title":"A varifocal augmented reality head-up display using Alvarez freeform lenses","authors":"Yi Liu, Yuqing Qiu, Jiaqi Dong, Bo-Ru Yang, Zong Qin","doi":"10.1002/jsid.1286","DOIUrl":"10.1002/jsid.1286","url":null,"abstract":"<p>A crucial requirement of augmented reality head-up displays (AR-HUDs) is continuously adjustable virtual image distance (VID), which allows adaptation to various depths in road environments and thereby avoids visual fatigue. However, usual varifocal components for near-eye displays are unavailable because AR-HUDs require the varifocal component's aperture to be larger than 10 cm. This study considers the Alvarez lenses, which change the optical power by in-plane sliding two freeform lenses. Under the paraxial assumption, classic Alvarez lenses can create a quadratic wavefront profile, but the large aperture and extensive diopter variation range required by AR-HUDs lead to significant aberrations. Thus, the classic paraxial Alvarez lens design is extended by co-optimizing Alvarez lenses with high-order surface profiles and a primary freeform mirror. Therefore, a novel varifocal AR-HUD containing Alvarez lenses with apertures larger than 15 cm is proposed. The AR-HUD generates a varifocal plane whose VID can be continuously adjusted between 2.5 and 7.5 m, and another focal plane with a fixed VID at 7.5 m. In addition, merely one display panel is used for compactness. Finally, an AR-HUD prototype with a reduced volume of 9.8 L was built. The expected varifocal performance and qualified imaging quality were experimentally verified through the field of view, VID, and image sharpness.</p>","PeriodicalId":49979,"journal":{"name":"Journal of the Society for Information Display","volume":"32 5","pages":"226-236"},"PeriodicalIF":2.3,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140662638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceptually natural standard-dynamic-range (SDR) images reproduced under normal viewing conditions should retain enough information for a human observer to estimate the time at which the actual high-dynamic-range (HDR) scene was captured, without recourse to artificial information. Currently, global and local tone-mapping operators (TMOs) seem to perform comparably. We therefore first consider the constraints that eye movement imposes on the actual human visual system and buttress a hypothesis with a demonstration. We briefly review the imperceptible illuminance effects produced by the personal circadian clock, as suggested by chronophysiological research, and other related effects, because our previous study suggested that the characteristics of the human visual system vary dynamically with the individual's circadian pattern. Finally, we conduct two psychophysical experiments based on the hypothesis that the human visual system employs, at a first stage of information compression, several global TMOs that depend on individual circadian visual features (ICVF). The results suggest that (1) no participant can perceive the actual capture time (ACT) and (2) sensitive observers can discriminate reproduced images based on virtual shooting time (VST) effects induced by different types of global TMOs. We also find that VST-based discrimination differs widely among people, yet most are unaware of this effect, as evidenced by daily conversations.
{"title":"Next generation personalized display systems employing adaptive dynamic-range compression techniques to address diversity in individual circadian visual features","authors":"Sakuichi Ohtsuka, Saki Iwaida, Yuichiro Orita, Shoko Hira, Masayuki Kashima","doi":"10.1002/jsid.1277","DOIUrl":"10.1002/jsid.1277","url":null,"abstract":"<p>Perceptually natural standard-dynamic-range (SDR) images reproduced under normal viewing conditions should retain enough information for the human observer to estimate the time at which the actual high-dynamic-range (HDR) scene was captured without recourse to artificial information. Currently, both global- and local-tone mapping operators (TMOs) seem to have comparable levels of performance. Therefore, we first consider the constraints created in the actual human visual system by eye movement, and buttress a hypothesis with a demonstration. We briefly review the imperceptible illuminance effects yielded by the personal circadian clock suggested by chronophysiological research and other related effects, because our previous study suggested that the characteristics of the human visual system dynamically varies depending on the individual's circadian pattern. Finally, we conduct two psychophysical experiments based on the hypothesis that the human visual system employs several global TMOs at the first stage for information compression that depend on individual-circadian-visual-features (ICVF). The results suggest that (1) no participant can perceive actual-capture-time (ACT) and (2) sensitive observers can discriminate reproduced images based on virtual-shooting-time (VST) effects induced by different types of global TMOs. We also discover that the VST-based discrimination differs widely among people, but most are unaware of this effect as evidenced by daily conversations.</p>","PeriodicalId":49979,"journal":{"name":"Journal of the Society for Information Display","volume":"32 6","pages":"462-483"},"PeriodicalIF":2.3,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jsid.1277","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140694508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We develop a measurement and evaluation system to quantify the halo effect of mini-light-emitting diode (LED) backlight liquid crystal displays (mLCDs). The validity and reliability of our halo measurement system were investigated through a human visual perception experiment. The results indicate that our system can effectively distinguish halo differences among displays, with a matching rate of 93.3% between our measurements and human visual judgments.
{"title":"Halo effect measurement for mini-light-emitting diode backlight liquid crystal displays","authors":"Wang Xinyu, Lu Zhiyong, Kuang Guofeng, Tang Guofu, Liu Chao, Zhang Qinquan, Lian Qiaozhen, Huang Xuerun","doi":"10.1002/jsid.1278","DOIUrl":"https://doi.org/10.1002/jsid.1278","url":null,"abstract":"<p>We develop a measurement and evaluation system to quantify the halo effect of mini-light-emitting diode (LED) backlight liquid crystal displays (mLCDs). The validity and reliability of our halo measurement system was investigated through a human visual perception experiment. The results indicate that our halo measurement system can effectively distinguish the halo differences among different displays, with matching rate of 93.3% between our measurement and the human visual system.</p>","PeriodicalId":49979,"journal":{"name":"Journal of the Society for Information Display","volume":"32 4","pages":"136-148"},"PeriodicalIF":2.3,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140559588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present a novel approach to counter the influence of ambient light on photodetectors used in applications like biometric recognition and environmental sensing. The proposed solution introduces a circuit-based technique that utilizes signal differencing to subtract ambient light signals before they reach the integrated circuit (IC). The process involves row and column differential signals, akin to analog circuit differential amplifiers. Simulations validate the circuit's functionality, showing its effectiveness in reducing ambient light impact. However, image reconstruction after differencing introduces blurriness due to the accumulation of noise. An alternative bidirectional fusion method is suggested, resulting in a clearer representation of features without noise accumulation. This innovative approach promises to enhance photodetector performance in challenging lighting conditions for various applications.
{"title":"In-panel ambient light eliminating differential circuit applied to active pixel fingerprint sensor","authors":"Ya-Hsiang Tai, Yi-Cheng Yuan, Chih-Yang Chen, Te-Yu Lee","doi":"10.1002/jsid.1279","DOIUrl":"https://doi.org/10.1002/jsid.1279","url":null,"abstract":"<p>In this paper, we present a novel approach to counter the influence of ambient light on photodetectors used in applications like biometric recognition and environmental sensing. The proposed solution introduces a circuit-based technique that utilizes signal differencing to subtract ambient light signals before they reach the integrated circuit (IC). The process involves row and column differential signals, akin to analog circuit differential amplifiers. Simulations validate the circuit's functionality, showing its effectiveness in reducing ambient light impact. However, image reconstruction after differencing introduces blurriness due to the accumulation of noise. An alternative bidirectional fusion method is suggested, resulting in a clearer representation of features without noise accumulation. This innovative approach promises to enhance photodetector performance in challenging lighting conditions for various applications.</p>","PeriodicalId":49979,"journal":{"name":"Journal of the Society for Information Display","volume":"32 4","pages":"149-158"},"PeriodicalIF":2.3,"publicationDate":"2024-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140559517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Today's display industry faces transistor-level challenges similar to those of complementary metal-oxide semiconductor (CMOS) metal-oxide semiconductor field-effect transistors (MOSFETs) in the mid-1990s. Learnings from MOSFETs inform the display industry's response to the limitations of silicon-based thin-film transistors (TFTs). Improvements sustaining Moore's Law drove the need to rethink MOSFET materials and structures, and the display industry now needs fundamental innovation at the device level. New thin-film devices enable an inflection point in the use of displays, just as the fin field-effect transistor (FinFET) defined the inflection point in CMOS in the 2000s. This paper outlines two innovations in thin-film device technology that offer improvements in the image quality and power consumption of flat-panel displays: amorphous metal gate TFTs (AMeTFTs) and amorphous metal nonlinear resistors (AMNRs). Linked through a single core material set based on mass-producible, thin-film amorphous metals, these two innovations create near- and long-term roadmaps simplifying the production of high-image-quality, low-power displays on glass (now) and plastic (future). In particular, the field-effect mobility of indium gallium zinc oxide (IGZO) AMeTFTs (55–72 cm²/(V·s)) exceeds that of IGZO TFTs developed by existing display manufacturers, without the need for atomic layer deposition or vertical stacking of heterostructure semiconductor films, making AMeTFTs a natural choice for the new G8.5–G8.7 fabs targeting IGZO backplanes.
{"title":"Innovations in thin-film electronics for the new generation of displays","authors":"Andre Zeumault, Jose E. Mendez, John Brewer","doi":"10.1002/jsid.1274","DOIUrl":"10.1002/jsid.1274","url":null,"abstract":"<p>Today's display industry faces transistor-level challenges similar to those of complementary metal-oxide semiconductor (CMOS) metal-oxide semiconductor field-effect transistors (MOSFETs) in the mid-1990s. Learnings from MOSFETs inform the display industry's response to the limitations of silicon-based thin-film transistors (TFTs). Improvements sustaining Moore's Law drove the need to rethink MOSFET materials and structures. The display industry needs fundamental innovation at the device level. New thin-film devices enable an inflection point in the use of displays, just as fin field-effect transistor (FinFET) defined the inflection point in CMOS in the 2000s. This paper outlines two innovations in thin-film device technology that offers improvement in image quality and power consumption of flat panel displays: amorphous metal gate TFTs (AMeTFTs) and amorphous metal nonlinear resistors (AMNRs). Linked through a single core material set based on mass-producible, thin-film amorphous metals, these two innovations create near- and long-term roadmaps simplifying the production of high-image quality, low-power consumption displays on glass (now) and plastic (future). In particular, the field-effect mobility of indium gallium zinc oxide (IGZO) AMeTFTs (55–72 cm<sup>2</sup>/Vs) exceeds that of IGZO TFTs developed by existing display manufacturers without the need for atomic layer deposition or vertical stacking of heterostructure semiconductor films, making AMeTFTs a natural choice for the new G8.5–G8.7 fabs targeting IGZO backplanes.</p>","PeriodicalId":49979,"journal":{"name":"Journal of the Society for Information Display","volume":"32 4","pages":"121-135"},"PeriodicalIF":2.3,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/jsid.1274","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140382078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a novel aerial display system that reconstructs face orientation. The proposed system forms two face images floating in mid-air. Viewers observe a spatially blended image of the two, where the blending ratio depends on the viewing position. Thus, the blended aerial face image is perceived to look in a fixed orientation even as the viewing position changes within a certain viewing range. We analyze the optical design of the spatial blending system and show results from our prototype display.
{"title":"Aerial display that reconstructs face orientation by use of spatial blending of two face images","authors":"Kohei Kishinami, Keigo Sato, Masaki Yasugi, Shiro Suyama, Hirotsugu Yamamoto","doi":"10.1002/jsid.1273","DOIUrl":"https://doi.org/10.1002/jsid.1273","url":null,"abstract":"<p>This paper proposes a novel aerial display system that reconstructs face orientation. The proposed system forms two face images floating in mid-air. Viewers observe a spatially blended image of the two face images, where the spatial blending ratio depends on the viewing position. Thus, the spatially blended aerial face image is perceived to look in a fixed orientation even if the viewing position is changed within a certain viewing range. We analyze the spatial blending system optical design and show results from our prototype display.</p>","PeriodicalId":49979,"journal":{"name":"Journal of the Society for Information Display","volume":"32 3","pages":"101-111"},"PeriodicalIF":2.3,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140188501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The visual microphone is a technique for remote sound recovery that extracts sound information from tiny, pixel-scale vibrations in a video. Despite demonstrated success in sound recovery, the impact of the visual enhancement and color conversion algorithms applied to a video before the sound recovery step has not been explored. Because the vibrations are so small, these preprocessing choices matter, and their effects on the recovered sound quality merit investigation. This work experimented with different color-to-grayscale conversions and visual enhancement algorithms on 576 videos and found that the recovered sound quality is indeed greatly affected by the choice of algorithm. The best conversions were the average of the red, green, and blue color channels and the perceptual lightness of the CIELAB color space, improving the recovered sound quality by up to 23.22%. Furthermore, visual enhancement techniques such as gamma correction were found to corrupt vibration information, leading to a 22.47% drop in recovered sound quality in one of the tested videos. It is therefore advisable to avoid or minimize visual enhancement before remote sound recovery, so as not to eliminate useful subtle vibrations.
{"title":"Impact of visual enhancement and color conversion algorithms on remote sound recovery from silent videos","authors":"Ren-Jun Choong, Wun-She Yap, Yan Chai Hum, Khin Wee Lai, Lloyd Ling, Anthony Vodacek, Yee Kai Tee","doi":"10.1002/jsid.1275","DOIUrl":"https://doi.org/10.1002/jsid.1275","url":null,"abstract":"<p>The visual microphone is a technique for remote sound recovery that extracts sound information from tiny pixel-scale vibrations in a video. Despite having demonstrated success in sound recovery, the impact of various visual enhancement and color conversion algorithms applied on the video before the sound recovery process has not been explored. Thus, it is important to investigate these effects have on the recovered sound quality, as the vibrations are so small the effects play an important role. This work experimented with different color to grayscale conversions and visual enhancement algorithms on 576 videos, and found that the recovered sound quality is indeed greatly affected by the choice of algorithms. The best conversion algorithms were found to be the average of the red, green and blue color channels and the perceptual lightness in the CIELAB color space, improving the recovered sound quality by up to 23.22%. Furthermore, visual enhancement techniques such as gamma correction have been found to corrupt vibration information, leading to a 22.47% drop in recovered sound quality in one of the tested videos. Therefore, it is advisable to avoid or minimize the use of visual enhancement techniques for remote sound recovery to prevent the elimination of useful subtle vibrations.</p>","PeriodicalId":49979,"journal":{"name":"Journal of the Society for Information Display","volume":"32 3","pages":"112-125"},"PeriodicalIF":2.3,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140188493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a dynamic information fusion interactive system for an in-vehicle transparent display. The system integrates three key technologies: relative position acquisition, fusion information mapping, and visual-field-tracking adaptive information generation. In the last of these, the position of the fused information shifts as the observer's field of view changes, so observers can use the system more comfortably and intuitively. The frame rate of the dynamic virtual information display is increased from 10 to 20 Hz without additional AI computing power, giving observers a better viewing experience. Integrating these technologies achieves an AR-based interactive system on a direct-view transparent display, which can provide passengers with location-based, interactive information that blends virtual content with the real scene in mobile settings.
{"title":"Fusion information refresh rate improvement based on adaptive visual tracking in vehicle augmented reality sightseeing interactive system","authors":"Yu-Hsiang Tsai, Hong-Ming Dai, Yung-Jhe Yan, Mang Ou-Yang","doi":"10.1002/jsid.1276","DOIUrl":"10.1002/jsid.1276","url":null,"abstract":"<p>This paper proposes a dynamic information fusion interactive system with a transparent display applied in the vehicle. The dynamic information fusion interactive system integrates three leading technologies: relative position acquisition, fusion information mapping, and visual field tracking adaptive information generation. For visual field tracking adaptive information generation, the position of fusion information changes as the observer's field of view varies to help observers use this system more comfortably and intuitively. The frame rate of the dynamic virtual information display can be increased from 10 to 20 Hz without increasing the AI computing power and providing observers with a better observer experience. Integrating those technologies could achieve an AR-based interactive system on a direct-view transparent display. Such a system can provide passengers with location-based interactive virtual and real integration information in the mobile field.</p>","PeriodicalId":49979,"journal":{"name":"Journal of the Society for Information Display","volume":"32 8","pages":"545-554"},"PeriodicalIF":1.7,"publicationDate":"2024-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140255075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}