
Displays: Latest Publications

Reinforcement learning path planning method incorporating multi-step Hindsight Experience Replay for lightweight robots
IF 3.7 CAS Tier 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-07-14 DOI: 10.1016/j.displa.2024.102796
Jiaqi Wang, Huiyan Han, Xie Han, Liqun Kuang, Xiaowen Yang

Home service robots prioritize cost-effectiveness and convenience over the precision required for industrial tasks such as autonomous driving, making their tasks easier to execute. Meanwhile, path planning tasks using Deep Reinforcement Learning (DRL) are commonly sparse-reward problems with limited data utilization, posing challenges in obtaining meaningful rewards during training and consequently resulting in slow or difficult training. In response to these challenges, our paper introduces a lightweight end-to-end path planning algorithm employing hindsight experience replay (HER). Initially, we optimize the reinforcement learning training process from scratch and map the complex high-dimensional action and state spaces to a representative low-dimensional action space. At the same time, we improve the network structure to decouple the model's navigation and obstacle-avoidance modules to meet lightweight requirements. Subsequently, we integrate HER and curriculum learning (CL) to tackle issues related to inefficient training. Additionally, we propose a multi-step hindsight experience replay (MS-HER) specifically for the path planning task, markedly enhancing both training efficiency and model generalization across diverse environments. To substantiate the enhanced training efficiency of the refined algorithm, we conducted tests within diverse Gazebo simulation environments. Results of the experiments reveal noteworthy enhancements in critical metrics, including success rate and training efficiency. To further ascertain the enhanced algorithm's generalization capability, we evaluate its performance in several "never-before-seen" simulation environments. Ultimately, we deploy the trained model onto a real lightweight robot for validation. The experimental outcomes indicate the model's competence in successfully executing the path planning task, even on a small robot with constrained computational resources.
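The core of HER is relabeling failed transitions with goals the agent actually reached, so sparse rewards become informative. Below is a minimal, illustrative sketch of multi-step hindsight relabeling in Python; the episode format and the `k` and `n_step` parameters are assumptions for illustration, not the paper's MS-HER specification.

```python
import random

def her_relabel(episode, k=4, n_step=3):
    """Relabel transitions with future achieved states as goals.

    episode: list of dicts with keys 'state', 'action', 'achieved', 'goal'.
    Returns extra transitions whose goal is a state actually reached within
    the next `n_step` steps (a crude multi-step flavour; the paper's MS-HER
    details differ).
    """
    extra = []
    for t, tr in enumerate(episode):
        # candidate goals: states actually achieved shortly after step t
        horizon = episode[t + 1:t + 1 + n_step]
        for future in random.sample(horizon, min(k, len(horizon))):
            new_goal = future['achieved']
            # sparse reward: 0 when the relabeled goal is met, -1 otherwise
            reward = 0.0 if tr['achieved'] == new_goal else -1.0
            extra.append({**tr, 'goal': new_goal, 'reward': reward})
    return extra
```

With relabeling, even an episode that never reaches the original goal yields transitions with zero-reward (success) signals, which is what accelerates training under sparse rewards.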

Citations: 0
Reduction of short-time image sticking in organic light-emitting diode display through transient analysis of low-temperature polycrystalline silicon thin-film transistor
IF 3.7 CAS Tier 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-07-09 DOI: 10.1016/j.displa.2024.102794
Jiwook Hong, Jaewon Lim, Jongwook Jeon

Accurate compensation operation of the low-temperature polycrystalline-silicon (LTPS) thin-film transistor (TFT) in pixel circuits is crucial for achieving steady and uniform luminance in organic light-emitting diode (OLED) display panels. However, the device characteristics fluctuate over time due to various traps in the LTPS thin-film transistor and at the interface with the gate insulator, resulting in abnormal phenomena such as short-time image sticking and luminance fluctuation, which degrade display quality during image changes. Considering these phenomena, transient analysis was conducted through device simulation to optimize the pixel compensation circuit. In particular, we analyzed the behavior of traps within the LTPS TFT in correlation with compensation circuit operation and, based on this, proposed a methodology for designing a reset voltage scheme for the driver TFT to reduce the image sticking phenomenon.

Citations: 0
MSAug: Multi-Strategy Augmentation for rare classes in semantic segmentation of remote sensing images
IF 3.7 CAS Tier 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-07-08 DOI: 10.1016/j.displa.2024.102779
Zhi Gong, Lijuan Duan, Fengjin Xiao, Yuxi Wang

Recently, remote sensing images have been widely used in many scenarios, gradually becoming a focus of attention. Nevertheless, the limited annotation of scarce classes severely reduces segmentation performance, a phenomenon that is especially prominent in remote sensing image segmentation. Given this, we focus on image fusion and model feedback, proposing a multi-strategy method called MSAug to address the remote sensing imbalance problem. Firstly, we crop rare-class images multiple times based on prior knowledge at the image-patch level to provide more balanced samples. Secondly, we design an adaptive image enhancement module at the model-feedback level to accurately classify rare classes at each stage and dynamically paste and mask different classes to further improve the model's recognition capabilities. The MSAug method is highly flexible and plug-and-play. Experimental results on remote sensing image segmentation datasets show that adding MSAug to any remote sensing image semantic segmentation network can bring varying degrees of performance improvement.
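As a rough illustration of the patch-level cropping strategy, the sketch below oversamples crops centred on rare-class pixels of a segmentation mask. The function name, crop size, and per-class sample count are hypothetical; the paper's MSAug additionally applies adaptive paste-and-mask steps driven by model feedback.

```python
import numpy as np

def rare_class_crops(image, mask, rare_ids, crop=64, per_instance=3, rng=None):
    """Sample extra training patches centred on rare-class pixels.

    image: (H, W, C) array; mask: (H, W) integer label map.
    Returns a list of (image_patch, mask_patch) pairs. A simplified
    sketch of patch-level oversampling, not the full MSAug pipeline.
    """
    rng = rng or np.random.default_rng(0)
    h, w = mask.shape
    patches = []
    for cls in rare_ids:
        ys, xs = np.nonzero(mask == cls)   # all pixels of the rare class
        if len(ys) == 0:
            continue
        for _ in range(per_instance):
            i = rng.integers(len(ys))
            # clamp the centre so the crop stays inside the image
            cy = np.clip(ys[i], crop // 2, h - crop // 2)
            cx = np.clip(xs[i], crop // 2, w - crop // 2)
            sl = np.s_[cy - crop // 2:cy + crop // 2,
                       cx - crop // 2:cx + crop // 2]
            patches.append((image[sl], mask[sl]))
    return patches
```

The extra patches would then be mixed into the training batches so rare classes are seen more often than their pixel frequency alone would allow.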

Citations: 0
ADS-VQA: Adaptive sampling model for video quality assessment
IF 3.7 CAS Tier 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-07-04 DOI: 10.1016/j.displa.2024.102792
Shuaibo Cheng, Xiaopeng Li, Zhaoyuan Zeng, Jia Yan

No-reference video quality assessment (NR-VQA) for user-generated content (UGC) plays a crucial role in ensuring the quality of video services. Although some works have achieved impressive results, their performance-complexity trade-off is still sub-optimal. On the one hand, overly complex network structures and additional inputs require more computing resources. On the other hand, simple sampling methods tend to overlook the temporal characteristics of videos, resulting in the degradation of local textures and potential distortion of the thematic content, consequently leading to a performance decline in VQA technologies. Therefore, in this paper, we propose an enhanced NR-VQA model, known as the Adaptive Sampling Strategy for Video Quality Assessment (ADS-VQA). Temporally, we conduct non-uniform sampling on videos utilizing features from the lateral geniculate nucleus (LGN) to capture the temporal characteristics of videos. Spatially, a dual-branch structure is designed to supplement spatial features across different levels. One branch samples patches at their raw resolution, effectively preserving local texture detail. The other branch performs a downsampling process guided by saliency cues, attaining global semantic features at a diminished computational expense. Experimental results demonstrate that the proposed approach achieves high performance at a lower computational cost than most state-of-the-art VQA models on four popular VQA databases.
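The temporal side of this idea can be approximated with motion-weighted non-uniform frame sampling: sample more densely where inter-frame change is large. The sketch below uses mean absolute frame difference as a simple stand-in for the LGN-derived features in ADS-VQA; the function name and weighting scheme are illustrative assumptions.

```python
import numpy as np

def adaptive_frame_indices(frames, n_keep):
    """Pick frame indices non-uniformly, denser where motion is larger.

    frames: (T, H, W) grayscale array. Mean absolute inter-frame
    difference serves as a crude motion proxy; indices are drawn by
    inverting the cumulative motion curve at evenly spaced targets.
    """
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0)).mean(axis=(1, 2))
    # give frame 0 an average weight; epsilon keeps static runs sampleable
    weights = np.concatenate([[diffs.mean() + 1e-8], diffs]) + 1e-8
    cum = np.cumsum(weights)
    cum /= cum[-1]
    targets = (np.arange(n_keep) + 0.5) / n_keep
    idx = np.searchsorted(cum, targets)
    return np.unique(np.clip(idx, 0, len(frames) - 1))
```

A static clip thus collapses to near-uniform sampling, while a clip with a burst of motion concentrates the kept frames inside the burst.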

Citations: 0
Bridge the gap between practical application scenarios and cartoon character detection: A benchmark dataset and deep learning model
IF 3.7 CAS Tier 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-07-04 DOI: 10.1016/j.displa.2024.102793
Zelu Qi, Da Pan, Tianyi Niu, Zefeng Ying, Ping Shi

The success of deep learning in the field of computer vision has made cartoon character detection (CCD) based on object detection a promising means of protecting intellectual property rights. However, due to the lack of suitable cartoon character datasets, CCD is still a little-explored field, and many problems remain to be solved to meet the needs of practical applications such as merchandise, advertising, and patent review. In this paper, we propose a new, challenging CCD benchmark dataset, called CCDaS, which consists of 140,339 images of 524 famous cartoon characters from 227 cartoon works, game works, and merchandise innovations. As far as we know, CCDaS is currently the largest CCD dataset for practical application scenarios. To further study CCD, we also provide a CCD algorithm, called multi-path YOLO (MP-YOLO), that can accurately detect multi-scale objects and facially similar objects in practical application scenarios. Experimental results show that our MP-YOLO achieves better detection results on the CCDaS dataset. Comparative and ablation studies further validate the effectiveness of our CCD dataset and algorithm.

Citations: 0
Multi-stage coarse-to-fine progressive enhancement network for single-image HDR reconstruction
IF 3.7 CAS Tier 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-07-03 DOI: 10.1016/j.displa.2024.102791
Wei Zhang, Gangyi Jiang, Yeyao Chen, Haiyong Xu, Hao Jiang, Mei Yu

Compared with traditional imaging, high dynamic range (HDR) imaging technology can record scene information more accurately, thereby providing users a higher-quality visual experience. Inverse tone mapping is a direct and effective way to realize single-image HDR reconstruction, but it usually suffers from problems such as detail loss, color deviation, and artifacts. To solve these problems, this paper proposes a multi-stage coarse-to-fine progressive enhancement network (named MSPENet) for single-image HDR reconstruction. The entire multi-stage network architecture is designed in a progressive manner to obtain higher-quality HDR images from coarse to fine, where a mask mechanism is used to eliminate the effects of over-exposed regions. Specifically, in the first two stages, two asymmetric U-Nets are constructed to learn the multi-scale information of the input image and perform coarse reconstruction. In the third stage, a residual network with a channel attention mechanism is constructed to learn the fusion of progressively transferred multi-level features and perform fine reconstruction. In addition, a multi-stage progressive detail enhancement mechanism is designed, including a progressive gated recurrent unit fusion mechanism and a multi-stage feature transfer mechanism. The former fuses the progressively transferred features with coarse HDR features to reduce the error-stacking effect caused by multi-stage networks. Meanwhile, the latter fuses early features to supplement the information lost during each stage of feature delivery and combines features from different stages. Extensive experimental results show that the proposed method can reconstruct higher-quality HDR images and effectively recover texture and color information in over-exposed regions compared to state-of-the-art methods.
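The mask mechanism can be illustrated with a simple soft over-exposure mask based on luminance thresholding, a common choice in single-image HDR work. The paper does not spell out its exact mask, so the threshold form below is an assumption for illustration.

```python
import numpy as np

def overexposure_mask(img, thr=0.95):
    """Soft mask: 1 in well-exposed areas, falling to 0 near saturation.

    img: float array in [0, 1] with shape (H, W, 3). A pixel counts as
    saturated if any channel clips, hence the per-pixel channel max.
    This is an illustrative stand-in for the paper's mask mechanism.
    """
    lum = img.max(axis=-1)
    mask = np.clip((1.0 - lum) / (1.0 - thr), 0.0, 1.0)
    return mask
```

Multiplying network features or losses by such a mask down-weights clipped regions, which carry no recoverable detail in the low dynamic range input.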

与传统成像技术相比,高动态范围(HDR)成像技术能更精确地记录场景信息,从而为用户提供更高质量的视觉体验。反色调映射是实现单幅图像 HDR 重建的一种直接而有效的方法,但它通常存在细节丢失、色彩偏差和伪影等问题。为了解决这些问题,本文提出了一种用于单图像 HDR 重建的多级粗到细渐进增强网络(命名为 MSPENet)。整个多级网络架构采用渐进式设计,从粗到细获得更高质量的 HDR 图像,其中使用了掩码机制来消除过曝区域的影响。具体来说,在前两个阶段,构建两个非对称 U-Net 来学习输入图像的多尺度信息并进行粗重建。在第三阶段,构建一个具有通道注意机制的残差网络,以学习逐步转移的多级特征的融合,并执行精细重建。此外,还设计了一种多级渐进细节增强机制,包括渐进门控递归单元融合机制和多级特征转移机制。前者将渐进转移的特征与粗略的 HDR 特征融合,以减少多级网络造成的误差叠加效应。同时,后者融合早期特征以补充每个阶段特征传递过程中丢失的信息,并将不同阶段的特征结合起来。大量实验结果表明,与最先进的方法相比,所提出的方法能重建更高质量的 HDR 图像,并有效恢复过曝区域的纹理和色彩信息。
Citations: 0
Exploring product style perception: A comparative eye-tracking analysis of users across varying levels of self-monitoring
IF 3.7 CAS Tier 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-28 DOI: 10.1016/j.displa.2024.102790
Yao Wang, Yang Lu, Cheng-Yi Shen, Shi-Jian Luo, Long-Yu Zhang

Digital shopping applications and platforms offer consumers a vast array of products with diverse styles and style attributes. Existing literature suggests that style preferences are determined by consumers' genders, ages, education levels, and nationalities. In this study, we argue for the feasibility and necessity of self-monitoring as an additional consumer variable impacting product style perception and preference, using eye-tracking technology. Three eye-movement experiments were conducted on forty-two participants (twenty males and twenty-two females; Age: M = 22.8, SD = 1.63). The results showed that participants with higher levels of self-monitoring exhibited shorter total fixation durations and fewer fixation counts while examining images of watch product styles. In addition, gender exerted an interaction effect on self-monitoring's impact, with female participants of high self-monitoring ability able to perceive differences in product styles more rapidly and with greater sensitivity. Overall, the results highlight the utility of self-monitoring as a research variable in product style perception investigations, as well as its implications for style intelligence classifiers and style neuroimaging.
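Total fixation duration and fixation count, the two metrics compared across self-monitoring groups, are straightforward to compute once the tracker has segmented gaze into events. A minimal sketch, with an assumed event format of (label, start_ms, end_ms) tuples:

```python
def fixation_metrics(events):
    """Total fixation duration (ms) and fixation count from labelled events.

    events: list of (label, start_ms, end_ms), label 'fix' or 'sac'.
    The event segmentation itself is assumed to come from the eye tracker.
    """
    fixations = [(s, e) for lab, s, e in events if lab == 'fix']
    total = sum(e - s for s, e in fixations)
    return total, len(fixations)
```

These per-trial values are what a between-group statistical test (e.g. on high versus low self-monitors) would then be run over.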

Citations: 0
Viewing preferences of ASD children on paintings
IF 3.7 CAS Tier 2 (Engineering) Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE Pub Date: 2024-06-27 DOI: 10.1016/j.displa.2024.102788
Ji-Feng Luo, Xinding Xia, Zhihao Wang, Fangyu Shi, Zhijuan Jin

The eye movement patterns of children with autism spectrum disorder (ASD) have been widely studied using high-precision, high-sampling-rate professional eye trackers. Still, the equipment used in these studies is expensive and requires skilled operators, and the stimuli are typically pictures or videos created outside the ASD group. We utilized a previously developed tablet-based eye-tracking device, with double-column paintings (one column from children with ASD and the other from typically developing (TD) children) as stimuli, to investigate the preference of ASD children for paintings created within their group. This study collected eye movement data from 82 children with ASD and 102 TD children; an adaptive eye movement classification algorithm was applied to the data aligned by sampling rate, followed by feature extraction and statistical analysis in terms of time, frequency, range, and clustering. Statistical tests indicate that, apart from displaying more pronounced non-compliance during the experiment (resulting in a higher data-loss rate), children with ASD did not show significant preferences in viewing the two types of paintings compared to TD children. Therefore, we tend to believe that there is no significant difference between the two groups of children in preference for ASD and TD paintings shown as diptychs on our eye-tracking device, and their feature values indicate that they do not have a viewing preference for the paintings.
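For context, a textbook dispersion-threshold (I-DT) pass is one common way to classify raw gaze samples into fixations; the adaptive classifier used in the study likely differs, and the thresholds below are illustrative only.

```python
def idt_classify(samples, disp_thresh=1.0, min_dur=3):
    """Dispersion-threshold (I-DT) fixation detection.

    samples: list of (x, y) gaze points at a fixed sampling rate.
    Returns (start_idx, end_idx) windows whose spatial dispersion
    stays under disp_thresh for at least min_dur samples.
    """
    def dispersion(w):
        xs, ys = zip(*w)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i, n = [], 0, len(samples)
    while i + min_dur <= n:
        j = i + min_dur
        if dispersion(samples[i:j]) <= disp_thresh:
            # grow the window while the points stay tightly clustered
            while j < n and dispersion(samples[i:j + 1]) <= disp_thresh:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations
```

Samples falling outside the returned windows would be treated as saccades or noise, which is where a higher data-loss rate for non-compliant participants shows up.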

{"title":"Viewing preferences of ASD children on paintings","authors":"Ji-Feng Luo ,&nbsp;Xinding Xia ,&nbsp;Zhihao Wang ,&nbsp;Fangyu Shi ,&nbsp;Zhijuan Jin","doi":"10.1016/j.displa.2024.102788","DOIUrl":"https://doi.org/10.1016/j.displa.2024.102788","url":null,"abstract":"<div><p>The eye movement patterns of children with autism spectrum disorder (ASD) based on high-precision, high-sampling-rate professional eye trackers have been widely studied. Still, the equipment used in these studies is expensive and requires skilled operators, and the stimuli are focused on pictures or videos created out of ASD group. We utilized a previously developed eye-tracking device using a tablet, and the double-column paintings with one column from children with ASD, and the other from typically developing (TD) children as stimuli, to investigate the preference of ASD children for the paintings created within their group. This study collected eye movement data from 82 children with ASD and 102 TD children, and an adaptive eye movement classification algorithm was applied to the data aligned by the sampling rate, followed by feature extraction and statistical analysis from the aspects of time, frequency, range, and clustering. Statistical tests indicate that apart from displaying more pronounced non-compliance during the experiment, resulting in a higher data loss rate, children with ASD did not show significant preferences in viewing the two types of paintings compared to TD children. 
Therefore, we tend to believe that there is no significant difference in preference for ASD and TD paintings showcasing as diptych from the two groups of children using our eye tracking device, and their feature values indicate that they do not have a viewing preference for the paintings.</p></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"84 ","pages":"Article 102788"},"PeriodicalIF":3.7,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141541041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A pyramid auxiliary supervised U-Net model for road crack detection with dual-attention mechanism
IF 3.7, CAS Zone 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2024-06-27. DOI: 10.1016/j.displa.2024.102787
Yingxiang Lu, Guangyuan Zhang, Shukai Duan, Feng Chen

The application of road crack detection technology plays a pivotal role in transportation infrastructure management. However, the diversity of crack morphologies within images and the complexity of background noise still pose significant challenges to automated detection, requiring deep learning models with more precise feature extraction and stronger resistance to noise interference. In this paper, we propose a pyramid auxiliary supervised U-Net model with a dual-attention mechanism. The pyramid auxiliary supervision module is integrated into the U-Net model, alleviating the information loss caused by pooling operations at the encoder end and thereby enhancing its global perception capability. In addition, within the dual-attention module, our model learns crucial segmentation features at both the pixel and channel levels. Together these enable our model to resist noise interference and achieve a higher level of precision in crack pixel segmentation. To substantiate the superiority and generalizability of our model, we conducted a comprehensive performance evaluation using public datasets. The experimental results indicate that our model surpasses current state-of-the-art methods. Additionally, we performed ablation studies to confirm the efficacy of the proposed modules.
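As a rough illustration of the dual-attention idea (a channel-level gate followed by a pixel-level spatial gate), the NumPy sketch below applies a CBAM-style gating to a feature map; the weight shapes, the tanh/sigmoid choices, and the mean+max spatial pooling are assumptions for illustration, not the paper's exact module:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dual_attention(feat, w1, w2):
    """feat: (C, H, W) feature map. Apply channel attention, then
    spatial attention, in a CBAM-style two-stage gating."""
    # channel attention: global average pooling -> tiny MLP -> sigmoid gate
    squeeze = feat.mean(axis=(1, 2))                # (C,)
    gate = sigmoid(w2 @ np.tanh(w1 @ squeeze))      # (C,) channel weights
    feat = feat * gate[:, None, None]
    # spatial attention: channel-wise mean and max maps -> sigmoid gate
    smap = sigmoid(feat.mean(axis=0) + feat.max(axis=0))  # (H, W)
    return feat * smap[None, :, :]
```

In a real network the two gates would be learned jointly with the U-Net encoder/decoder; here the weights are just inputs to keep the sketch self-contained.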

{"title":"A pyramid auxiliary supervised U-Net model for road crack detection with dual-attention mechanism","authors":"Yingxiang Lu,&nbsp;Guangyuan Zhang,&nbsp;Shukai Duan,&nbsp;Feng Chen","doi":"10.1016/j.displa.2024.102787","DOIUrl":"https://doi.org/10.1016/j.displa.2024.102787","url":null,"abstract":"<div><p>The application of road crack detection technology plays a pivotal role in the domain of transportation infrastructure management. However, the diversity of crack morphologies within images and the complexity of background noise still pose significant challenges to automated detection technologies. This necessitates that deep learning models possess more precise feature extraction capabilities and resistance to noise interference. In this paper, we propose a pyramid auxiliary supervised U-Net model with Dual-Attention mechanism. Pyramid auxiliary supervision module is integrated into the U-Net model, alleviating information loss at the encoder end due to pooling operations, thereby enhancing its global perception capability. Besides, within dual-attention module, our model learns crucial segmentation features both at the pixel and channel levels. These enable our model to avoid noise interference and achieve a higher level of precision in crack pixel segmentation. To substantiate the superiority and generalizability of our model, we conducted a comprehensive performance evaluation using public datasets. The experimental results indicate that our model surpasses current great methods. 
Additionally, we performed ablation studies to confirm the efficacy of the proposed modules.</p></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"84 ","pages":"Article 102787"},"PeriodicalIF":3.7,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141480278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Elemental image array generation based on BVH structure combined with spatial partition and display optimization
IF 3.7, CAS Zone 2 (Engineering & Technology), Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE. Pub Date: 2024-06-27. DOI: 10.1016/j.displa.2024.102784
Tianshu Li, Shigang Wang, Jian Wei, Yan Zhao, Chenxi Song, Rui Zhang

Integral imaging displays are widely used because they offer full-parallax viewing, high comfort without glasses, a simple structure, and easy implementation. This paper enhances the speed of elemental image array (EIA) generation by optimizing the acceleration structure of the ray tracing algorithm. By considering the characteristics of the segmented objects captured by the camera during integral image rendering, a novel accelerating structure is constructed by combining a BVH with the camera space. The BVH traversal is expanded into a 4-ary tree based on depth priority order to reduce hierarchy depth and expedite hit-point detection. Additionally, the parameters of the camera array are constrained according to the reconstructed three-dimensional (3D) image range, ensuring optimal object coverage on screen. Experimental results demonstrate that this algorithm reduces the ray tracing time for hitting the triangle mesh of collision objects while automatically determining the display range for stereo images and adjusting camera parameters accordingly, thereby maximizing utilization of integral imaging display resources.
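BVH traversal ultimately reduces to ray-vs-AABB tests at each node; the slab test below is a minimal Python sketch of that building block (the 4-ary, depth-prioritized traversal and camera-space partition described in the abstract are not reproduced here):

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: return True if the ray origin + t*direction (t >= 0)
    intersects the axis-aligned bounding box [box_min, box_max]."""
    tmin, tmax = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            # ray parallel to this slab: hit only if origin lies inside it
            if not (lo <= o <= hi):
                return False
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        # shrink the [tmin, tmax] interval by this slab's entry/exit times
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax
```

A BVH walk would run this test at each node and descend only into children whose boxes the ray hits, which is why a shallower (e.g. 4-ary) hierarchy can reduce the number of tests per ray.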

{"title":"Elemental image array generation based on BVH structure combined with spatial partition and display optimization","authors":"Tianshu Li,&nbsp;Shigang Wang,&nbsp;Jian Wei,&nbsp;Yan Zhao,&nbsp;Chenxi song,&nbsp;Rui Zhang","doi":"10.1016/j.displa.2024.102784","DOIUrl":"10.1016/j.displa.2024.102784","url":null,"abstract":"<div><p>Integral imaging display has been widely used because of its features such as full parallax viewing, high comfort without glasses, simple structure and easy implementation. This paper enhances the speed of EIA generation by optimizing the acceleration structure of the ray tracing algorithm. By considering the characteristics of segmental objects captured by the camera during integral image rendering, a novel accelerating structure is constructed by combining BVH with the camera space. The BVH traversal is expanded into a 4-tree based on depth priority order to reduce hierarchy and expedite hit point detection. Additionally, the parameters of the camera array are constrained according to the reconstructed three-dimensional (3D) image range, ensuring optimal object coverage on screen. 
Experimental results demonstrate that this algorithm reduces ray tracing time for hitting triangle grid of collision objects while automatically determining display range for stereo images and adjusting camera parameters accordingly, thereby maximizing utilization of integrated imaging display resources.</p></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"84 ","pages":"Article 102784"},"PeriodicalIF":3.7,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141571159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0