
Latest publications in Displays

Skeuomorphic or flat? The effects of icon style on visual search and recognition performance
IF 3.7 · CAS Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-08-17 · DOI: 10.1016/j.displa.2024.102813

Although there have been many previous studies on icon visual search and recognition performance, only a few have considered the effects of both the internal and external characteristics of icons. In this behavioral study, we employed a visual search task and a semantic recognition task to explore the effects of icon style, semantic distance (SD), and task difficulty on users’ performance in perceiving and identifying icons. First, we created and filtered 64 new icons, which were divided into four different groups (flat design & close SD, flat design & far SD, skeuomorphic design & close SD, skeuomorphic design & far SD) through expert evaluation. A total of 40 participants (13 men and 27 women, ages ranging from 19 to 25 years, mean age = 21.9 years, SD=1.93) were asked to perform an icon visual search task and an icon recognition task after a round of learning. Participants’ accuracy and response time were measured as a function of the following independent variables: two icon styles (flat or skeuomorphic style), two levels of SD (close or far), and two levels of task difficulty (easy or difficult). The results showed that flat icons had better visual search performance than skeuomorphic icons; this beneficial effect increased as the task difficulty increased. However, in the icon recognition task, participants’ performance in recalling skeuomorphic icons was significantly better than that in recalling flat icons. Furthermore, a strong interaction effect between icon style and task difficulty was observed for response time. As the task difficulty decreased, the difference in recognition performance between these two different icon styles increased significantly. These findings provide valuable guidance for the design of icons in human–computer interaction interfaces.
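As an illustration of how a 2 × 2 × 2 within-subject design like this can be aggregated, the sketch below computes per-cell mean response times from a hypothetical trial log. All variable names and values are invented for illustration; they are not the study’s data.

```python
# Hypothetical trial log: (icon_style, semantic_distance, difficulty, rt_ms, correct)
trials = [
    ("flat",         "close", "easy", 612, True),
    ("flat",         "close", "easy", 598, True),
    ("skeuomorphic", "close", "easy", 655, True),
    ("flat",         "close", "hard", 744, True),
    ("skeuomorphic", "close", "hard", 871, False),  # error trials excluded from RT means
    ("skeuomorphic", "close", "hard", 903, True),
]

def cell_mean_rt(trials):
    """Mean response time per (style, SD, difficulty) cell, correct trials only."""
    cells = {}
    for style, sd, diff, rt, correct in trials:
        if correct:  # RT analyses are conventionally restricted to correct responses
            cells.setdefault((style, sd, diff), []).append(rt)
    return {cell: sum(rts) / len(rts) for cell, rts in cells.items()}

means = cell_mean_rt(trials)
print(means[("flat", "close", "easy")])  # 605.0
```

Comparing such cell means across difficulty levels is what reveals the style × difficulty interaction the abstract reports.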

Citations: 0
Interactive geometry editing of Neural Radiance Fields
IF 3.7 · CAS Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-08-13 · DOI: 10.1016/j.displa.2024.102810

Neural Radiance Fields (NeRF) have recently emerged as a promising approach for synthesizing highly realistic images from 3D scenes. This technology has shown impressive results in capturing intricate details and producing photorealistic renderings. However, one of the limitations of traditional NeRF approaches is the difficulty in editing and manipulating the geometry of the scene once it has been captured. This restriction hinders creative freedom and practical applicability.

In this paper, we propose a method that enables interactive geometry editing for neural radiance fields manipulation. We use two proxy cages (inner cage and outer cage) to edit a scene. The inner cage defines the operation target, and the outer cage defines the adjustment space. Various operations apply to the two cages. After cage selection, operations on the inner cage lead to the desired transformation of the inner cage and adjustment of the outer cage. Users can edit the scene with translation, rotation, scaling, or combinations. The operations on the corners and edges of the cage are also supported. Our method does not need any explicit 3D geometry representations. The interactive geometry editing applies directly to the implicit neural radiance fields. Extensive experimental results demonstrate the effectiveness of our approach.
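A minimal sketch of the cage idea: each enclosed point moves by a weighted average of the cage-vertex displacements, so transforming the inner cage drags the enclosed content with it. Simple inverse-distance weights are used here as a stand-in for whatever cage coordinates the method actually employs; the abstract does not specify them.

```python
import numpy as np

def inverse_distance_weights(p, cage, eps=1e-8):
    """Normalized inverse-distance weights of point p w.r.t. cage vertices
    (a stand-in for proper cage coordinates such as mean-value coordinates)."""
    d = np.linalg.norm(cage - p, axis=1)
    w = 1.0 / (d + eps)
    return w / w.sum()

def deform(points, cage_rest, cage_deformed):
    """Move each point by the weighted displacement of the cage vertices."""
    out = []
    for p in points:
        w = inverse_distance_weights(p, cage_rest)
        out.append(p + w @ (cage_deformed - cage_rest))
    return np.array(out)

# Translate the whole cage by +1 in x: enclosed points follow rigidly.
cage = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
moved = cage + np.array([1., 0., 0.])
pts = np.array([[0.25, 0.25, 0.25]])
print(deform(pts, cage, moved))  # ~[[1.25, 0.25, 0.25]]
```

Rotation, scaling, and per-vertex edits fit the same pattern: only `cage_deformed` changes, and no explicit mesh of the scene is ever needed.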

Citations: 0
Towards benchmarking VR sickness: A novel methodological framework for assessing contributing factors and mitigation strategies through rapid VR sickness induction and recovery
IF 3.7 · CAS Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-08-13 · DOI: 10.1016/j.displa.2024.102807

Virtual Reality (VR) sickness remains a significant challenge in the widespread adoption of VR technologies. The absence of a standardized benchmark system hinders progress in understanding and effectively countering VR sickness. This paper proposes an initial step towards a benchmark system, utilizing a novel methodological framework to serve as a common platform for evaluating contributing VR sickness factors and mitigation strategies. Our benchmark, grounded in established theories and leveraging existing research, features both small and large environments. In two research studies, we validated our system by demonstrating its capability to (1) quickly, reliably, and controllably induce VR sickness in both environments, followed by a rapid decline post-stimulus, facilitating cost and time-effective within-subject studies and increased statistical power, (2) integrate and evaluate established VR sickness mitigation methods — static and dynamic field of view reduction, blur, and virtual nose — demonstrating their effectiveness in reducing symptoms in the benchmark and their direct comparison within a standardized setting. Our proposed benchmark also enables broader, more comparative research into different technical, setup, and participant variables influencing VR sickness and overall user experience, ultimately paving the way for building a comprehensive database to identify the most effective strategies for specific VR applications.
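One of the mitigation methods evaluated above, dynamic field-of-view reduction, can be sketched as a vignette that narrows the rendered FOV as motion intensity grows. All thresholds below are illustrative assumptions, not values from the paper.

```python
def fov_vignette_deg(angular_speed_deg_s,
                     full_fov_deg=110.0,
                     min_fov_deg=60.0,
                     speed_at_min=120.0):
    """Linearly narrow the rendered FOV with (hypothetical) camera/head
    angular speed, clamped to [min_fov_deg, full_fov_deg]."""
    t = min(max(angular_speed_deg_s / speed_at_min, 0.0), 1.0)
    return full_fov_deg - t * (full_fov_deg - min_fov_deg)

print(fov_vignette_deg(0))    # 110.0 — static view, no restriction
print(fov_vignette_deg(60))   # 85.0  — half-speed, half-narrowed
print(fov_vignette_deg(500))  # 60.0  — fully narrowed
```

A static variant simply holds the narrowed FOV constant; a benchmark like the one proposed lets both variants be compared under identical induction conditions.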

Citations: 0
A feature fusion module based on complementary attention for medical image segmentation
IF 3.7 · CAS Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-08-10 · DOI: 10.1016/j.displa.2024.102811

Automated segmentation algorithms are a crucial component of medical image analysis, playing an essential role in assisting professionals to achieve accurate diagnoses. Traditional convolutional neural networks (CNNs) face challenges when dealing with complex and variable lesions: limited by the receptive field of convolutional operators, CNNs often struggle to capture long-range dependencies of complex lesions. The transformer’s outstanding ability to capture long-range dependencies offers a new perspective on addressing these challenges. Inspired by this, our research aims to combine the precise spatial detail extraction capabilities of CNNs with the global semantic understanding abilities of transformers. Unlike traditional fusion methods, we propose a fine-grained feature fusion strategy based on complementary attention, deeply exploring and complementarily fusing the feature representations of the encoder. Moreover, considering that merely relying on feature fusion might overlook critical texture details and key edge features in the segmentation task, we designed a feature enhancement module based on information entropy. This module emphasizes shallow texture features and edge information, enabling the model to more accurately capture and enhance multi-level details of the image, further optimizing segmentation results. Our method was tested on multiple public segmentation datasets of polyps and skin lesions, and performed better than state-of-the-art methods. Extensive qualitative experimental results indicate that our method maintains robust performance even when faced with challenging cases of narrow or blurry-boundary lesions.
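A toy sketch of the two ingredients named above: complementary attention (a gate and its complement weighting the CNN and transformer branches so the weights sum to one) and a histogram-based entropy measure over a feature map. The shapes and the exact gating form are assumptions; the abstract does not give the module’s formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def complementary_fuse(local_feat, global_feat, gate_logits):
    """Gate a = sigmoid(logits) weights the CNN branch; its complement
    (1 - a) weights the transformer branch, so the two always sum to one."""
    a = sigmoid(gate_logits)
    return a * local_feat + (1.0 - a) * global_feat

def feature_entropy(fmap, bins=16):
    """Shannon entropy (bits) of a feature map's value histogram; low entropy
    suggests flat regions, high entropy suggests texture/edge content."""
    hist, _ = np.histogram(fmap, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

f_cnn = np.ones((4, 4))   # stand-in for a local (CNN) feature map
f_trf = np.zeros((4, 4))  # stand-in for a global (transformer) feature map
fused = complementary_fuse(f_cnn, f_trf, gate_logits=np.zeros((4, 4)))
print(fused[0, 0])  # 0.5 — equal mixing when the gate is uncommitted
```

An entropy map like this could then be used to re-weight shallow features toward textured regions, in the spirit of the enhancement module described.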

Citations: 0
Evaluation and application strategy of low blue light mode of desktop display based on brightness characteristics
IF 3.7 · CAS Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-08-10 · DOI: 10.1016/j.displa.2024.102809

Long-term use of desktop displays may increase the burden on the visual system, and users can enable a low blue light mode to protect their eyes in terms of circadian effect. In this work, we investigated its influence from two aspects: the visual effect of brightness, namely efficacy and circadian effect, and color quality, namely the color difference Δu’v’ (the chromaticity-coordinate offset between two colors) and Duv (deviation from the blackbody locus). Decreasing brightness increases efficacy while diminishing the circadian effect. Blue, cyan, and magenta have the largest Δu’v’, and the lower the saturation, the greater the Δu’v’. The lower the correlated color temperature (CCT), the greater the Duv and the farther it deviates from the Planckian locus. We summarize three low blue light mode adjustment strategies based on the red, green, and blue three-channel ratio of the spectrum, and propose an optimized mode using a genetic algorithm, which has two optional CCT ranges of 3500–5000 K and 2700–3000 K. Furthermore, we establish the relationship between brightness and gamut coverage to refine the screen brightness range for low blue light mode. This research provides valuable insights into low blue light mode applications and their implications for human-centric healthy displays.
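The Δu’v’ metric above is the Euclidean distance between two colors in the CIE 1976 u’v’ chromaticity diagram. A minimal implementation using the standard colorimetric conversion from CIE 1931 xy (these are textbook formulas, not code from the paper):

```python
import math

def uv_prime(x, y):
    """CIE 1976 u'v' chromaticity from CIE 1931 xy coordinates."""
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 9.0 * y / d

def delta_uv_prime(xy1, xy2):
    """Euclidean distance between two colors in the u'v' diagram."""
    u1, v1 = uv_prime(*xy1)
    u2, v2 = uv_prime(*xy2)
    return math.hypot(u1 - u2, v1 - v2)

# Identical chromaticities (D65 white) give zero difference:
print(delta_uv_prime((0.3127, 0.3290), (0.3127, 0.3290)))  # 0.0
```

Comparing a color rendered in normal mode against the same color in low blue light mode with this metric quantifies the chromaticity shift the mode introduces.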

Citations: 0
Human pose estimation in complex background videos via Transformer-based multi-scale feature integration
IF 3.7 · CAS Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-08-08 · DOI: 10.1016/j.displa.2024.102805

Human pose estimation is still a hot research topic. Previous algorithms based on traditional machine learning suffer from difficult feature extraction and low fusion efficiency. To address these problems, we propose a Transformer-based method that combines three techniques to capture the human pose: a Transformer-based feature extraction module, a multi-scale feature fusion module, and an occlusion processing mechanism. The Transformer-based feature extraction module uses the self-attention mechanism to extract key features from the input sequence; the multi-scale feature fusion module fuses feature information of different scales to enhance the perception ability of the model; and the occlusion processing mechanism handles occlusion in the data and effectively removes background interference. Our method shows excellent performance on the standard Human3.6M dataset and an in-the-wild video dataset, achieving accurate pose prediction for both complex actions and challenging samples.
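The self-attention step at the core of such a feature extraction module can be sketched as standard scaled dot-product attention over a token sequence; the dimensions and single-head form below are illustrative assumptions, not the paper’s architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: every token (e.g. a body-joint
    embedding) attends to every other token in the sequence."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(17, 8))                    # e.g. 17 joint tokens, 8-dim each
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (17, 8)
```

Because every token attends to every other one, dependencies between distant joints are captured in a single layer, which is exactly the property motivating the Transformer choice here.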

Citations: 0
Development of low-temperature polycrystalline silicon process and novel 2T2C driving circuits for electric paper
IF 3.7 · CAS Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-08-08 · DOI: 10.1016/j.displa.2024.102808

In this work, we systematically investigate low-temperature polycrystalline silicon (LTPS)-based driving circuits for electronic paper, with the aim of adopting a small width/length ratio (W/L) in LTPS-based thin-film transistors (TFTs) to reduce switch error and thus improve image sticking. Firstly, LTPS-TFTs with extremely low off-state leakage current (IOFF), even at a large source-drain voltage (VDS) of 30 V, were obtained through detailed exploration of LTPS process technology. Meanwhile, the high on-state current (ION) of the LTPS-TFTs also meets the requirements of fast signal writing to the storage capacitor, owing to their extremely high field-effect mobility (approximately 100 cm2/V⋅s), making it possible to fabricate TFTs with relatively small W/L and thereby minimize switch error. The ID-VD test results reveal that the produced LTPS-TFTs can effectively withstand the maximum voltage difference of 30 V during product operation. Subsequently, the optimal W/L of the LTPS-TFT was determined experimentally. Reliability tests on the obtained LTPS-TFTs then revealed that their threshold voltage (VTH) shifted by 0.08 V after 7200 s under negative bias temperature stress (NBTS), and by only 0.19 V under positive bias temperature stress (PBTS). The aging test results of the aforementioned LTPS-TFTs exhibit a new physical phenomenon: the IOFF of the LTPS-TFTs has a strict matching characteristic with the aging direction. Next, we propose a novel 2T2C driving circuit for the e-paper, which effectively avoids the adverse effects of IOFF on the frame holding period, and implement it in an array layout. Finally, we combine the optimal fabrication process of the LTPS-TFTs with the 2T2C driving circuit design scheme to produce an e-paper with outstanding image-sticking performance.
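To see why a low IOFF matters during the frame holding period, the voltage droop on the storage capacitor can be estimated from the basic relation ΔV = IOFF · t / C. The numbers below are purely illustrative, not device values from the paper.

```python
def droop_mV(i_off_A, hold_s, c_storage_F):
    """Storage-capacitor voltage droop over one frame-hold period,
    dV = I_off * t / C, returned in millivolts."""
    return i_off_A * hold_s / c_storage_F * 1e3

# Hypothetical: 10 fA leakage, 1 s hold (e-paper frames are long), 1 pF storage cap.
print(round(droop_mV(10e-15, 1.0, 1e-12), 3))  # 10.0 (mV)
```

The longer the hold and the smaller the capacitor, the more any residual leakage erodes the stored pixel voltage, which is the effect the 2T2C circuit is designed to counter.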

Citations: 0
Review on SLAM algorithms for Augmented Reality
IF 3.7 · CAS Region 2 (Engineering & Technology) · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-07-31 · DOI: 10.1016/j.displa.2024.102806

Augmented Reality (AR) has gained significant attention in recent years as a technology that enhances the user’s perception of, and interaction with, the real world by overlaying virtual objects. The Simultaneous Localization and Mapping (SLAM) algorithm plays a crucial role in enabling AR applications by allowing a device to understand its position and orientation in the real world while mapping the environment. This paper first summarizes AR products and SLAM algorithms of recent years and presents a comprehensive overview of SLAM algorithms, including feature-based, direct, and deep-learning-based methods, highlighting their advantages and limitations. It then provides an in-depth exploration of classical SLAM algorithms for AR, with a focus on visual SLAM and visual-inertial SLAM. Lastly, sensor configuration, datasets, and performance evaluation for AR SLAM are discussed. The review concludes with a summary of the current state of SLAM algorithms for AR and provides insights into future research and development directions in this field. Overall, this review serves as a valuable resource for researchers and engineers interested in understanding the advancements and challenges in SLAM algorithms for AR.
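The localization half of any visual or visual-inertial SLAM front end rests on composing the current pose with a relative motion estimate. A deliberately minimal 2D (SE(2)) sketch of that prediction step, independent of any particular SLAM system:

```python
import math

def compose(pose, delta):
    """Compose a 2D pose (x, y, theta) with a relative motion (dx, dy, dtheta)
    expressed in the pose's own frame — the core odometry prediction step."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

pose = (0.0, 0.0, 0.0)
pose = compose(pose, (1.0, 0.0, math.pi / 2))  # move 1 m forward, turn left 90°
pose = compose(pose, (1.0, 0.0, 0.0))          # move 1 m forward again
print([round(v, 3) for v in pose])  # [1.0, 1.0, 1.571]
```

Full SLAM systems extend this to SE(3), fuse inertial measurements, and correct accumulated drift via loop closure, but the compounding of relative motions shown here is what all the surveyed methods share.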

Citations: 0
High-resolution enhanced cross-subspace fusion network for light field image superresolution
IF 3.7 · CAS Tier 2, Engineering & Technology · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-07-29 · DOI: 10.1016/j.displa.2024.102803

Light field (LF) images offer abundant spatial and angular information, and combining the two benefits LF image superresolution (LF image SR). Existing methods often decompose the 4D LF data into low-dimensional subspaces for individual feature extraction and fusion. However, their performance is limited because they lack effective correlations between subspaces and miss crucial complementary information for capturing rich texture details. To address this, we propose a cross-subspace fusion network for LF spatial SR (CSFNet). Specifically, we design a progressive cross-subspace fusion module (PCSFM) that progressively establishes cross-subspace correlations through a cross-attention mechanism to comprehensively enrich LF information. Additionally, we propose a high-resolution adaptive enhancement group (HR-AEG), which preserves texture and edge details in the high-resolution feature domain by employing a multibranch enhancement method and an adaptive weighting strategy. Experimental results demonstrate that our approach achieves highly competitive performance on multiple LF datasets compared with state-of-the-art (SOTA) methods.
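The cross-attention mechanism the PCSFM is described as building on can be shown in a framework-free sketch. The token counts and dimensions below are arbitrary toy values, not the paper's architecture; the point is only that queries from one subspace attend over keys/values from another, which is how cross-subspace correlations are formed:

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query token forms a
    softmax-weighted combination of the value tokens, with weights
    given by query-key similarity."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)       # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # rows sum to 1
    return weights @ values                        # (n_q, d_v)

# Toy example: 4 tokens from one subspace attend over 6 from another.
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k = rng.standard_normal((6, 8))
v = rng.standard_normal((6, 8))
out = cross_attention(q, k, v)
print(out.shape)  # (4, 8)
```

A "progressive" module in the paper's sense would stack such blocks so that later stages refine the correlations established by earlier ones.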

Citations: 0
A dense video caption dataset of student classroom behaviors and a baseline model with boundary semantic awareness
IF 3.7 · CAS Tier 2, Engineering & Technology · Q1 COMPUTER SCIENCE, HARDWARE & ARCHITECTURE · Pub Date: 2024-07-26 · DOI: 10.1016/j.displa.2024.102804

Dense video captioning automatically locates events in untrimmed videos and describes their content in natural language. The task has many potential applications, including security, assistance for people who are visually impaired, and video retrieval. Related datasets form an important foundation for research on data-driven methods. However, existing models for building dense video caption datasets were designed for the general domain and often ignore the characteristics and requirements of a specific domain. In addition, a one-way dataset construction process cannot form a closed-loop iterative scheme for improving dataset quality. This paper therefore proposes a novel dataset construction model suited to classroom scenarios, and on this basis constructs the Dense Video Caption Dataset of Student Classroom Behaviors (SCB-DVC). Additionally, existing dense video captioning methods typically use only temporal event boundaries as direct supervision during localization and ignore semantic information, leaving the localization and captioning stages only weakly correlated. This makes it harder to locate events in videos with oversmooth boundaries, where the temporal foreground and background of an event are excessively similar. We therefore propose a dense video captioning method based on fine-grained semantic-aware assisted boundary localization. By introducing semantic-aware information, it learns the discriminative features between an event's foreground and background more effectively, yielding sharper boundary perception and more accurate captions. Experimental results show that the proposed method performs well on both the SCB-DVC dataset and public datasets (ActivityNet Captions, YouCook2, and TACoS). We will release the SCB-DVC dataset soon.
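Event-localization quality of the kind this abstract discusses is conventionally scored with temporal intersection-over-union between a predicted event span and a ground-truth span. A minimal sketch of that standard metric (not code from the paper):

```python
def temporal_iou(a, b):
    """Temporal IoU between two event proposals, each an
    (start, end) pair in seconds: overlap length divided by the
    length of the union of the two spans."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

# A prediction that covers 4 s of a 6 s ground-truth event:
print(temporal_iou((2.0, 8.0), (4.0, 10.0)))  # 0.5
```

Benchmarks such as ActivityNet Captions report captioning scores averaged over several temporal-IoU thresholds, which is why sharper boundary localization directly improves the headline numbers.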

Citations: 0