
Latest articles in Displays

Field curvature analysis and optimization method for near-eye display systems
IF 3.7 CAS Zone 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-15 DOI: 10.1016/j.displa.2024.102777
Da Wang, Dewen Cheng, Cheng Yao, Qiwei Wang

Near-eye display (NED) systems with a large field of view (FOV) and a large pupil are often accompanied by field curvature. In such cases, there is often a significant deviation between the virtual image seen by the human eye and an ideal plane. Currently, there is a lack of precise methods for describing and controlling the shape of virtual images in visual space. In this paper, the system is modeled, and the curvature is controlled through optimization. Under limited conditions, the system’s field curvature is controlled to approach an ideal state. When the system’s field curvature cannot be completely corrected, a method for describing the virtual image surface is introduced. This method helps optical designers effectively predict the degree of curvature and the distance of the virtual image. The image quality of the system with a curved image is evaluated and optimized based on the focusing function of the human eye. Additionally, the field curvature and pupil swim are analyzed jointly. The effectiveness of the method is verified by designing two different types of NED systems.
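The abstract does not reproduce the paper's optical model; as background for how field curvature arises in such designs, the classical Petzval sum over a system's refracting surfaces (standard textbook optics, not the authors' formulation) can be computed as:

```python
def petzval_sum(surfaces):
    """Classical Petzval sum: 1/R_p = sum((n2 - n1) / (n1 * n2 * r))
    over the refracting surfaces, where each surface is a tuple of
    (index before, index after, radius of curvature in meters).
    A nonzero sum means the natural image surface is curved."""
    return sum((n2 - n1) / (n1 * n2 * r) for n1, n2, r in surfaces)

# Single refracting surface from air (n=1.0) into glass (n=1.5), r = 50 mm
p = petzval_sum([(1.0, 1.5, 0.05)])
# The Petzval image surface radius is 1/p
```

A designer drives this sum toward zero, or toward a chosen target curvature, during optimization, which mirrors the paper's goal of controlling the shape of the virtual image surface.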

Citations: 0
Physiological and performance metrics during a cardiopulmonary real-time feedback simulation to estimate cognitive load
IF 4.3 CAS Zone 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-15 DOI: 10.1016/j.displa.2024.102780
Blanca Larraga-García, Verónica Ruiz Bejerano, Xabier Oregui, Javier Rubio-Bolívar, Manuel Quintana-Díaz, Álvaro Gutiérrez

Multitasking is crucial for First Responders (FRs) in emergency scenarios, enabling them to prioritize and treat victims efficiently. However, research on multitasking and its impact on rescue operations is limited. This study explores the relationship between multitasking, working memory, and the performance of chest compressions during cardiopulmonary resuscitation (CPR). In this experiment, eighteen first-year residents performed a CPR maneuver on a real-time feedback simulator to learn chest compressions. Several secondary tasks were developed and carried out concurrently with the chest compressions. Heart rate, respiration rate, galvanic skin response, body temperature, eye-gaze movements and chest compression performance data were collected. The findings indicate that multitasking degraded chest compression quality for all secondary tasks, with a significant effect (p-value < 0.05) on compression frequency, which worsened in all cases. Vital signs such as heart rate, respiration rate, and eye-gaze speed were also affected during multitasking, although the change in vital signs varied with the type of secondary task. In conclusion, performing multiple tasks during chest compressions degrades performance. Understanding cognitive load and its impact on vital signs can aid in training FRs to handle complex scenarios efficiently.
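The abstract reports only the significance level; a minimal sketch of the kind of paired comparison this implies (hypothetical compressions-per-minute data, not the study's measurements), using a hand-rolled paired t statistic:

```python
import math

def paired_t(before, after):
    """Paired t statistic for two matched samples (no SciPy needed)."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical compressions-per-minute: CPR alone vs. CPR with a secondary task
solo = [112, 108, 115, 110, 109, 113]
dual = [104, 101, 109, 103, 100, 107]
t = paired_t(solo, dual)  # compare against the critical value for df = n - 1
```

With df = 5, |t| > 2.571 corresponds to p < 0.05 in a two-sided test, the threshold the study reports.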

Citations: 0
GRLN: Gait Refined Lateral Network for gait recognition
IF 3.7 CAS Zone 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-15 DOI: 10.1016/j.displa.2024.102776
Yukun Song, Xin Mao, Xuxiang Feng, Changwei Wang, Rongtao Xu, Man Zhang, Shibiao Xu

Gait recognition aims to identify individuals at a distance based on their biometric gait patterns. While offering flexibility in network input, existing set-based methods often overlook the potential of fine-grained local features by relying solely on global gait features, and fail to fully exploit the communication between silhouette-level and set-level features. To alleviate this issue, we propose the Gait Refined Lateral Network (GRLN), featuring plug-and-play Adaptive Feature Refinement (AFR) modules that extract discriminative features progressively from silhouette-level and set-level representations in a coarse-to-fine manner at various network depths. AFR can be widely applied in set-based gait recognition models to substantially enhance their recognition performance. To align with the extracted refined features, we introduce Horizontal Stable Mapping (HSM), a novel mapping technique that reduces model parameters while improving experimental results. To demonstrate the effectiveness of our method, we evaluate GRLN on two gait datasets, achieving the highest recognition rate among all set-based methods. Specifically, GRLN demonstrates an average improvement of 1.15% over the state-of-the-art set-based method on CASIA-B. In particular, in the coat-wearing condition, GRLN outperforms the contrast method GLN by 5%.
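The abstract does not specify how HSM or AFR are implemented; a generic horizontal-strip pooling step of the kind common in set-based gait models (a hypothetical stand-in, not GRLN's exact mapping) looks like:

```python
def horizontal_pool(feature_map, bins):
    """Split a 2-D feature map (H x W, H assumed divisible by `bins`)
    into horizontal strips and pool each strip with max + mean, a common
    building block of set-based gait models (generic illustration, not
    GRLN's exact Horizontal Stable Mapping)."""
    step = len(feature_map) // bins
    out = []
    for b in range(bins):
        # flatten one horizontal strip of rows into a single value list
        strip = [v for row in feature_map[b * step:(b + 1) * step] for v in row]
        out.append(max(strip) + sum(strip) / len(strip))
    return out

fm = [[1, 2], [3, 4], [5, 6], [7, 8]]  # toy 4x2 feature map
feats = horizontal_pool(fm, bins=2)
```

Pooling per horizontal strip preserves coarse body-part locality (head, torso, legs) while producing a fixed-length descriptor regardless of the silhouette set size.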

Citations: 0
Blind quality assessment of night-time photos: A region selective approach
IF 3.7 CAS Zone 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-13 DOI: 10.1016/j.displa.2024.102774
Zongxi Han, Rong Xie

Despite the emergence of low-light enhancement algorithms and associated quality assessment metrics in the literature, few works consider the quality assessment of real night-time photos captured by mobile cameras. In this paper, we handle this task by first constructing a night-time photo database (NPHD), which consists of 510 photos captured by 30 mobile devices in 17 scenes. Their mean opinion scores are rated by 10 people using the anchor-ruler method. Furthermore, we propose a region-selective approach for objective image quality assessment (RSIQA), from which different feature sets are extracted. Specifically, the center and surrounding regions are partitioned for brightness, contrast, vignetting, saturation and shading. The brightest areas are located as the region where highlight-suppressing capability is judged. Finally, we select the foreground and sharpest regions for the assessment of detail preservation, naturalness, noise, and image structure. To map the multiple quality attributes of a night-time photo into a single quality score, four regressors (support vector regression, decision tree, random forest and AdaBoost.R2) are chosen and compared. Experiments on NPHD demonstrate that the proposed RSIQA achieves superior results compared to 17 state-of-the-art quality metrics of four types, including conventional general-purpose, deep-learning-based, contrast-oriented and night-specific ones.
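As an illustration of the center/around partition (the actual window sizes and feature definitions are the paper's and are not given in the abstract), a minimal sketch that splits a grayscale image into a central window and its surrounding ring and returns the mean brightness of each region:

```python
def center_around_stats(img, frac=0.5):
    """Split a grayscale image (list of rows of pixel values) into a
    central window covering `frac` of each dimension plus the surrounding
    ring, and return the mean brightness of each region (an illustrative
    stand-in for the paper's center/around partition)."""
    h, w = len(img), len(img[0])
    ch, cw = int(h * frac), int(w * frac)
    r0, c0 = (h - ch) // 2, (w - cw) // 2
    center, around = [], []
    for i, row in enumerate(img):
        for j, v in enumerate(row):
            region = center if r0 <= i < r0 + ch and c0 <= j < c0 + cw else around
            region.append(v)
    return sum(center) / len(center), sum(around) / len(around)

# 4x4 toy image: bright 2x2 center (200) inside a dark ring (10)
img = [[10, 10, 10, 10],
       [10, 200, 200, 10],
       [10, 200, 200, 10],
       [10, 10, 10, 10]]
c_mean, a_mean = center_around_stats(img)
```

Comparing such per-region statistics is what makes attributes like vignetting (dark corners, bright center) measurable at all, since a single global mean would hide them.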

Citations: 0
Projection helps to improve visual impact: On a dark or foggy day
IF 4.3 CAS Zone 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-12 DOI: 10.1016/j.displa.2024.102769
Yan Mao, Xuan Wang, Wu He, Gaofeng Pan

Driving is a highly visually demanding activity. Different driving conditions affect drivers differently, so to understand the effects of driving vision on drivers, this paper directly investigates the role of central and peripheral vision in different scenarios and tests whether projection training improves driving behavior. We use a VR device to selectively present information in the central and peripheral parts of the field of view. In Experiment 1, we compare the performance of experienced and inexperienced drivers driving through four different visual conditions under dark and foggy skies. Participants' visual search behavior and driving behavior were recorded simultaneously. Experiment 2 determined whether training with a circular projection of three colors improved driver behavior. The results showed that (1) central vision is critical to the driver, and the importance of peripheral vision can be directly measured using the VR device; (2) a clear central and blurred peripheral view not only improves driver behavior in foggy weather but also helps to improve attention and driving ability; (3) among the color projections, the green projection was more effective than the others and significantly improved driving behavior; and (4) novice drivers collected visual information mainly from their central vision and drove less well than veterans, but the green projection improved their driving ability and reduced collisions. Most importantly, the study results provide a new visual training paradigm that can improve driver behavior on dark and foggy days, especially for female novices.

Citations: 0
Dual-bootstrapping gate driver circuit design using IGZO TFTs
IF 4.3 CAS Zone 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-12 DOI: 10.1016/j.displa.2024.102772
Congwei Liao, Xin Zheng, Shengdong Zhang

To integrate thin-film transistor (TFT) gate driver circuit technology into high-resolution, large-size, narrow-bezel display applications, achieving high speed is a critical challenge. This paper proposes a dual-bootstrapping TFT integrated gate driver circuit for large-size displays. The over-drive voltage of the driving TFT is increased at both the rising and falling edges of the output waveform. To validate its feasibility, the proposed circuit was fabricated using amorphous indium-gallium-zinc-oxide (a-IGZO) TFT technology and measured in terms of transient response with cascaded stages, along with reliability tests over long operating times. Compared to conventional approaches, the proposed gate driver demonstrates a 39% reduction in falling time as well as a compact layout. The proposed gate driver is therefore well suited to large-size display applications that involve heavy resistance–capacitance (RC) loading and require resolutions above 8K.
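The abstract gives no circuit parameters; the standard first-order charge-conservation estimate of a bootstrapped gate-node voltage (illustrative values, not the paper's) is:

```python
def bootstrapped_gate_voltage(v_precharge, v_swing, c_boot, c_par):
    """First-order charge-conservation estimate of the gate-node voltage
    after one bootstrapping event: the output swing couples through the
    bootstrap capacitor, attenuated by parasitic capacitance at the node."""
    return v_precharge + v_swing * c_boot / (c_boot + c_par)

# Illustrative numbers (not from the paper): 20 V precharge, 25 V output
# swing, 1 pF bootstrap capacitor, 0.25 pF parasitic capacitance
v_gate = bootstrapped_gate_voltage(20.0, 25.0, 1.0e-12, 0.25e-12)
# coupling ratio = 1 / 1.25 = 0.8, so v_gate = 20 + 25 * 0.8 = 40 V
```

Dual bootstrapping applies such a boost at both output edges, which is why the over-drive voltage of the driving TFT rises on the falling edge as well as the rising one.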

Citations: 0
Objectively assessing visual analogue scale of knee osteoarthritis pain using thermal imaging
IF 4.3 CAS Zone 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-08 DOI: 10.1016/j.displa.2024.102770
Bitao Ma, Jiajie Chen, Xiaoxiao Yan, Zhanzhan Cheng, Nengfeng Qian, Changyin Wu, Wendell Q. Sun

Knee osteoarthritis (KOA) is a common degenerative joint disorder that significantly deteriorates the quality of life for affected patients, primarily through the symptom of knee pain. In this study, we developed a machine learning methodology that integrates infrared thermographic technology with health data to objectively evaluate the Visual Analogue Scale (VAS) scores for knee pain in patients suffering from KOA. We preprocessed thermographic data from two healthcare centers by removing background noise and extracting Regions of Interest (ROI), which allowed us to capture image features. These were then merged with patient health data to build a comprehensive feature set. We employed various regression models to predict the VAS scores. The results indicate that the XGBoost model, using a 7:3 training-to-testing ratio, outperformed other models across several evaluation metrics. This study confirms the practicality and effectiveness of using thermographic imaging and machine learning for assessing knee pain, providing a new supportive tool for the management of pain in KOA and potentially increasing the objectivity of clinical assessments. The research is primarily focused on the middle-aged and elderly populations. In the future, we plan to extend the use of this technology to monitor risk factors in children’s knees, with the goal of improving their long-term quality of life and enhancing the overall well-being of the population.
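A minimal sketch of the 7:3 train/test split the paper reports for its XGBoost model (a generic shuffled split, not the authors' exact protocol):

```python
import random

def train_test_split(samples, train_frac=0.7, seed=42):
    """Shuffle indices and split samples at the paper's 7:3 ratio
    (generic illustration; the seed and shuffling are assumptions)."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(len(samples) * train_frac)
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]

train, test = train_test_split(list(range(100)))
```

Fixing the seed makes the split reproducible, which matters when comparing several regression models on the same held-out test set, as the study does.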

Citations: 0
PalmSecMatch: A data-centric template protection method for palmprint recognition
IF 4.3 CAS Zone 2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-08 DOI: 10.1016/j.displa.2024.102771
Chengcheng Liu, Huikai Shao, Dexing Zhong

While existing palmprint recognition research aims to improve accuracy in various situations, it often overlooks the security implications. This paper delves into template protection for palmprint recognition. Existing template protection methods usually cannot strike a good balance between security, accuracy and usability, which limits their applicability. In this work, a data-centric approach for palmprint template protection is proposed, called PalmSecMatch. Our solution extracts the key from the plaintext data itself, greatly reducing the dependency on third-party or independent key generation algorithms. The backbone of PalmSecMatch consists of key data extraction and encryption, order shuffling of the raw vectors, hashing code generation, shuffling basis and hashing code fading. PalmSecMatch subtly exploits the fact that biometric data are random variables and benefits from its data-centric nature. PalmSecMatch allows the same plaintext features to be encrypted into highly different ciphertexts, which greatly enhances security. At the same time, the data-fading strategy makes it extremely difficult for an attacker to distinguish user data from auxiliary data. The security analysis shows that PalmSecMatch satisfies the requirements of ISO/IEC 24745. Extensive experiments on two public palmprint databases validate the effectiveness of the proposed method.
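The abstract does not give PalmSecMatch's construction; a minimal shuffle-then-hash sketch of key-derived template protection (hypothetical, stdlib-only, and omitting the paper's encryption and code-fading steps) illustrates how identical features map to different protected templates under different keys:

```python
import hashlib
import random

def protect_template(features, key):
    """Shuffle a feature vector with a key-derived permutation, then hash
    the result: a generic shuffle-then-hash sketch, NOT the paper's exact
    PalmSecMatch construction (which also encrypts and fades the codes)."""
    rng = random.Random(key)              # key-derived pseudo-random order
    order = list(range(len(features)))
    rng.shuffle(order)
    shuffled = [features[i] for i in order]
    return hashlib.sha256(repr(shuffled).encode("utf-8")).hexdigest()

feats = [0.12, 0.80, 0.33, 0.57, 0.91, 0.05, 0.44, 0.68]
t1 = protect_template(feats, key="user-secret-1")
t2 = protect_template(feats, key="user-secret-2")
# t1 and t2 differ (with overwhelming probability) even though the plaintext
# features are identical; the same key always reproduces the same template
```

Because the permutation depends on the key, revoking a compromised template only requires issuing a new key, while the stored digest reveals neither the features nor their original order.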

Citations: 0
Comparison of visual and multisensory augmented reality for precise manual manipulation tasks
IF 4.3 CAS Q2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-06-06 DOI: 10.1016/j.displa.2024.102768
Xiaotian Zhang , Weiping He , Yunfei Qin , Mark Billinghurst , Jiepeng Dong , Daisong Liu , Jilong Bai , Zenglei Wang

Precise manual manipulation is an important skill in daily life, and Augmented Reality (AR) is increasingly being used to support such operations. This article reports on a study investigating the usability of visual and multisensory AR for precise manual manipulation tasks, in particular the representation of detailed deviations from the target pose. Two AR instruction interfaces were developed: the visual deviation instruction and the multisensory deviation instruction. Both interfaces used visual cues to indicate the required directions for manipulation. The difference was that the visual deviation instruction used text and color mapping to represent deviations, whereas the multisensory deviation instruction used sonification and vibration to represent deviations. A user study was conducted with 16 participants to compare the two interfaces. The results found a significant difference only in speed, without significant differences in accuracy, perceived ease-of-use, workload, or custom user experience elements. Multisensory deviation cues can speed up precise manual manipulation compared to visual deviation cues, but inappropriate sonification and vibration strategies can negatively affect users’ subjective experience, offsetting the benefits of multisensory AR. Based on the results, several recommendations were provided for designing AR instruction interfaces to support precise manual manipulation.
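The sonification/vibration encoding described above can be sketched as a simple mapping from pose error to feedback intensity. This mapping is hypothetical — the frequency range, the 20 mm scale, and the function name are not taken from the paper:

```python
def deviation_to_feedback(deviation_mm: float, max_dev_mm: float = 20.0):
    """Hypothetical deviation encoding: normalize the positional error
    from the target pose and map it to a beep frequency (sonification)
    and a vibration amplitude. Both channels grow with the error, so a
    user can correct the pose without reading a numeric display."""
    level = min(abs(deviation_mm) / max_dev_mm, 1.0)  # clamp to [0, 1]
    freq_hz = 220.0 + level * 660.0  # 220 Hz on target, 880 Hz at max error
    vib_amp = level                  # 0 = still, 1 = strongest pulse
    return freq_hz, vib_amp
```

A design caution follows directly from the study's conclusion: a mapping like this speeds up correction only if the sonification strategy is well chosen, since an unpleasant or ambiguous mapping can offset the benefit.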

Citations: 0
LDDG: Long-distance dependent and dual-stream guided feature fusion network for co-saliency object detection
IF 4.3 CAS Q2 (Engineering & Technology) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date : 2024-06-04 DOI: 10.1016/j.displa.2024.102767
Longsheng Wei , Siyuan Guo , Jiu Huang , Xuan Fan

Complex image scenes pose a challenge for the co-saliency object detection task in the field of saliency detection: salient objects may be hard to locate accurately, surrounding background information can interfere with object recognition, and multi-layer collaborative features are difficult to fuse well. To address these problems, we propose a long-range dependent and dual-stream guided feature fusion network. First, we enhance saliency features with the proposed coordinate attention module so that the network can learn a better feature representation. Second, we capture long-range dependency information in the image features with the proposed non-local module, obtaining more comprehensive contextual information. Finally, we propose a dual-stream guided network to fuse multiple layers of synergistic saliency features. The dual-stream guided network includes classification streams and mask streams, and the layers in the decoding network are guided to fuse the features of each layer, outputting more accurate co-saliency prediction maps. Experimental results show that our method outperforms existing methods on three common datasets: CoSal2015, CoSOD3k, and CoCA.
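The long-range dependencies mentioned above are what a non-local module captures: every spatial position attends to every other position. A minimal NumPy sketch of the standard embedded-Gaussian non-local operation (the weight matrices and shapes here are illustrative, not the paper's exact module):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x, w_q, w_k, w_v):
    """Embedded-Gaussian non-local operation over a flattened feature
    map x of shape (N, C): each of the N positions aggregates
    information from all others, so the output at one pixel can depend
    on context anywhere in the image. Weight shapes are (C, C)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    attn = softmax(q @ k.T / np.sqrt(x.shape[1]), axis=-1)  # (N, N) affinity
    return x + attn @ v  # residual connection keeps the original feature
```

In a real network `x` would be a reshaped convolutional feature map and the projections learned; the sketch only shows why the operation is "long-range": the `(N, N)` affinity matrix connects every position pair directly, with no receptive-field limit.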

Citations: 0