
Visual Informatics — Latest Publications

Time analysis of regional structure of large-scale particle using an interactive visual system
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.03.004
Yihan Zhang , Guan Li , Guihua Shan

N-body numerical simulation is an important tool in astronomy. Scientists use this method to simulate the formation of the structure of the universe, which is key to understanding how the universe formed. As research on this subject develops further, astronomers require a more precise method that enables expansion of the simulation and an increase in the number of simulation particles. However, retaining all temporal information is infeasible due to limited computer storage. Under these constraints, astronomers save temporal data only at intervals, yielding coarse and hard-to-interpret animations of the universe's evolution. In this study, we propose a deep-learning-assisted interpolation application to analyze the structure formation of the universe. First, we evaluate through an experiment the feasibility of applying interpolation to generate an animation of the universe's evolution. Then, we demonstrate the superiority of the deep convolutional neural network (DCNN) method by comparing its quality and performance with the actual results and with the results generated by other popular interpolation algorithms. In addition, we present PRSVis, an interactive visual analytics system that supports global volume rendering, local area magnification, and temporal animation generation. PRSVis allows users to visualize a global volume rendering, interactively select one cubic region from the rendering, and intelligently produce a time-series animation of the high-resolution region using the deep-learning-assisted method. In summary, we propose an interactive visual system, integrated with the experimentally validated DCNN interpolation method, to help scientists easily understand the evolution of the particle region structure.
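As a point of reference for what interpolating between stored time steps means, here is a minimal linear-interpolation baseline over two saved snapshots. This is a generic sketch with hypothetical flattened density arrays, not the authors' DCNN method:

```python
def lerp_snapshots(vol_a, vol_b, t):
    """Linearly interpolate between two stored density snapshots.

    vol_a, vol_b: flat lists of voxel densities at two consecutive
    saved time steps; t in [0, 1] selects the intermediate time.
    """
    if len(vol_a) != len(vol_b):
        raise ValueError("snapshots must have the same shape")
    return [(1.0 - t) * a + t * b for a, b in zip(vol_a, vol_b)]

# Halfway between two coarse snapshots of a (flattened) density field.
mid = lerp_snapshots([0.0, 2.0, 4.0], [2.0, 2.0, 0.0], 0.5)
# mid == [1.0, 2.0, 2.0]
```

A learned interpolator replaces this per-voxel blend with a prediction conditioned on how structures actually move between snapshots.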

{"title":"Time analysis of regional structure of large-scale particle using an interactive visual system","authors":"Yihan Zhang ,&nbsp;Guan Li ,&nbsp;Guihua Shan","doi":"10.1016/j.visinf.2022.03.004","DOIUrl":"10.1016/j.visinf.2022.03.004","url":null,"abstract":"<div><p>N-body numerical simulation is an important tool in astronomy. Scientists used this method to simulate the formation of structure of the universe, which is key to understanding how the universe formed. As research on this subject further develops, astronomers require a more precise method that enables expansion of the simulation and an increase in the number of simulation particles. However, retaining all temporal information is infeasible due to a lack of computer storage. In the circumstances, astronomers reserve temporal data at intervals, merging rough and baffling animations of universal evolution. In this study, we propose a deep-learning-assisted interpolation application to analyze the structure formation of the universe. First, we evaluate the feasibility of applying interpolation to generate an animation of the universal evolution through an experiment. Then, we demonstrate the superiority of deep convolutional neural network (DCNN) method by comparing its quality and performance with the actual results together with the results generated by other popular interpolation algorithms. In addition, we present PRSVis, an interactive visual analytics system that supports global volume rendering, local area magnification, and temporal animation generation. PRSVis allows users to visualize a global volume rendering, interactively select one cubic region from the rendering and intelligently produce a time-series animation of the high-resolution region using the deep-learning-assisted method. 
In summary, we propose an interactive visual system, integrated with the DCNN interpolation method that is validated through experiments, to help scientists easily understand the evolution of the particle region structure.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 2","pages":"Pages 14-24"},"PeriodicalIF":3.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000171/pdfft?md5=d3e25d7a79a6452e30ca6c3511bd690a&pid=1-s2.0-S2468502X22000171-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123011836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VCNet: A generative model for volume completion
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.04.004
Jun Han, Chaoli Wang

We present VCNet, a new deep learning approach for volume completion by synthesizing missing subvolumes. Our solution leverages a generative adversarial network (GAN) that learns to complete volumes using the adversarial and volumetric losses. The core design of VCNet features a dilated residual block and long-term connection. During training, VCNet first randomly masks basic subvolumes (e.g., cuboids, slices) from complete volumes and learns to recover them. Moreover, we design a two-stage algorithm for stabilizing and accelerating network optimization. Once trained, VCNet takes an incomplete volume as input and automatically identifies and fills in the missing subvolumes with high quality. We quantitatively and qualitatively test VCNet with volumetric data sets of various characteristics to demonstrate its effectiveness. We also compare VCNet against a diffusion-based solution and two GAN-based solutions.
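The random masking step used during training can be illustrated in a few lines. This toy version zeroes one randomly placed cuboid in a nested-list volume; the helper name and layout are hypothetical, not the VCNet implementation:

```python
import random

def mask_random_cuboid(volume, size, rng=random.Random(0)):
    """Zero out a randomly placed cuboid inside a 3D volume.

    volume: nested lists indexed as volume[z][y][x]; size: (dz, dy, dx).
    Returns the cuboid's origin so training can supervise recovery
    of the masked region.
    """
    dz, dy, dx = size
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    z0 = rng.randrange(nz - dz + 1)
    y0 = rng.randrange(ny - dy + 1)
    x0 = rng.randrange(nx - dx + 1)
    for z in range(z0, z0 + dz):
        for y in range(y0, y0 + dy):
            for x in range(x0, x0 + dx):
                volume[z][y][x] = 0.0
    return (z0, y0, x0)

# Mask a 2x2x2 cuboid in a 4x4x4 volume of ones.
vol = [[[1.0] * 4 for _ in range(4)] for _ in range(4)]
origin = mask_random_cuboid(vol, (2, 2, 2))
```

Slice-style masks follow the same pattern with one dimension of the cuboid spanning the full volume extent.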

{"title":"VCNet: A generative model for volume completion","authors":"Jun Han,&nbsp;Chaoli Wang","doi":"10.1016/j.visinf.2022.04.004","DOIUrl":"10.1016/j.visinf.2022.04.004","url":null,"abstract":"<div><p>We present VCNet, a new deep learning approach for volume completion by synthesizing missing subvolumes. Our solution leverages a generative adversarial network (GAN) that learns to complete volumes using the adversarial and volumetric losses. The core design of VCNet features a dilated residual block and long-term connection. During training, VCNet first randomly masks basic subvolumes (e.g., cuboids, slices) from complete volumes and learns to recover them. Moreover, we design a two-stage algorithm for stabilizing and accelerating network optimization. Once trained, VCNet takes an incomplete volume as input and automatically identifies and fills in the missing subvolumes with high quality. We quantitatively and qualitatively test VCNet with volumetric data sets of various characteristics to demonstrate its effectiveness. We also compare VCNet against a diffusion-based solution and two GAN-based solutions.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 2","pages":"Pages 62-73"},"PeriodicalIF":3.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000213/pdfft?md5=2cafa6586ad2e597b6694ededebdd295&pid=1-s2.0-S2468502X22000213-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127673002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
VETA: Visual eye-tracking analytics for the exploration of gaze patterns and behaviours
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.02.004
Sarah Goodwin , Arnaud Prouzeau , Ryan Whitelock-Jones , Christophe Hurter , Lee Lawrence , Umair Afzal , Tim Dwyer

Eye tracking is growing in popularity across multiple application areas, yet analysing and exploring the large volume of complex data remains difficult for most users. We present a comprehensive eye-tracking visual analytics system that enables the exploration and presentation of eye-tracking data across time and space in an efficient manner. The application allows the user to gain an overview of general patterns and perform deep visual analysis of local gaze exploration. The ability to link directly to the video of the underlying scene allows visualisation insights to be verified on the fly. The system was motivated by the need to analyse eye-tracking data collected from an ‘in the wild’ study with energy network operators and has been further evaluated via interviews with 14 eye-tracking experts in multiple domains. Results suggest that, thanks to state-of-the-art visualisation techniques and by providing context with videos, our system could enable improved analysis of eye-tracking data through interactive exploration, facilitating comparison between different participants or conditions and thus enhancing the presentation of complex data analysis to non-experts. This research paper provides three contributions: (1) analysis of a motivational use case demonstrating the need for rich visual-analytics workflow tools for eye-tracking data; (2) a highly dynamic system to visually explore and present complex eye-tracking data; (3) insights from our applied use case evaluation and interviews with experienced users demonstrating the potential of the system and of visual analytics for the wider eye-tracking community.

{"title":"VETA: Visual eye-tracking analytics for the exploration of gaze patterns and behaviours","authors":"Sarah Goodwin ,&nbsp;Arnaud Prouzeau ,&nbsp;Ryan Whitelock-Jones ,&nbsp;Christophe Hurter ,&nbsp;Lee Lawrence ,&nbsp;Umair Afzal ,&nbsp;Tim Dwyer","doi":"10.1016/j.visinf.2022.02.004","DOIUrl":"10.1016/j.visinf.2022.02.004","url":null,"abstract":"<div><p>Eye tracking is growing in popularity for multiple application areas, yet analysing and exploring the large volume of complex data remains difficult for most users. We present a comprehensive eye tracking visual analytics system to enable the exploration and presentation of eye-tracking data across time and space in an efficient manner. The application allows the user to gain an overview of general patterns and perform deep visual analysis of local gaze exploration. The ability to link directly to the video of the underlying scene allows the visualisation insights to be verified on the fly. The system was motivated by the need to analyse eye-tracking data collected from an ‘in the wild’ study with energy network operators and has been further evaluated via interviews with 14 eye-tracking experts in multiple domains. Results suggest that, thanks to state-of-the-art visualisation techniques and by providing context with videos, our system could enable an improved analysis of eye-tracking data through interactive exploration, facilitating comparison between different participants or conditions, thus enhancing the presentation of complex data analysis to non-experts. 
This research paper provides four contributions: (1) analysis of a motivational use case demonstrating the need for rich visual-analytics workflow tools for eye-tracking data; (2) a highly dynamic system to visually explore and present complex eye-tracking data; (3) insights from our applied use case evaluation and interviews with experienced users demonstrating the potential for the system and visual analytics for the wider eye-tracking community.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 2","pages":"Pages 1-13"},"PeriodicalIF":3.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000122/pdfft?md5=61f32cb9f0d63c98d7bd5bb3f5a44b85&pid=1-s2.0-S2468502X22000122-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128667157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Trinary tools for continuously valued binary classifiers
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.04.002
Michael Gleicher, Xinyi Yu, Yuheng Chen

Classification methods for binary (yes/no) tasks often produce a continuously valued score. Machine learning practitioners must perform model selection, calibration, discretization, performance assessment, tuning, and fairness assessment. Such tasks involve examining classifier results, typically using summary statistics and manual examination of details. In this paper, we provide an interactive visualization approach to support such continuously-valued classifier examination tasks. Our approach addresses the three phases of these tasks: calibration, operating point selection, and examination. We enhance standard views and introduce task-specific views so that they can be integrated into a multi-view coordination (MVC) system. We build on an existing comparison-based approach, extending it to continuous classifiers by treating the continuous values as trinary (positive, unsure, negative) even if the classifier will not ultimately use the 3-way classification. We provide use cases that demonstrate how our approach enables machine learning practitioners to accomplish key tasks.
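The trinary treatment of a continuous score reduces to two operating thresholds. A minimal sketch, with threshold values that are hypothetical rather than taken from the paper:

```python
def trinarize(score, low, high):
    """Map a continuous classifier score to a trinary label.

    Scores below `low` are 'negative', scores above `high` are
    'positive', and anything in between is 'unsure'.
    """
    if score < low:
        return "negative"
    if score > high:
        return "positive"
    return "unsure"

labels = [trinarize(s, 0.3, 0.7) for s in (0.1, 0.5, 0.9)]
# labels == ['negative', 'unsure', 'positive']
```

Sliding the two thresholds corresponds to the calibration and operating-point-selection phases the abstract describes: widening the gap grows the "unsure" band that a human or fallback process must resolve.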

{"title":"Trinary tools for continuously valued binary classifiers","authors":"Michael Gleicher,&nbsp;Xinyi Yu,&nbsp;Yuheng Chen","doi":"10.1016/j.visinf.2022.04.002","DOIUrl":"10.1016/j.visinf.2022.04.002","url":null,"abstract":"<div><p>Classification methods for binary (yes/no) tasks often produce a continuously valued score. Machine learning practitioners must perform model selection, calibration, discretization, performance assessment, tuning, and fairness assessment. Such tasks involve examining classifier results, typically using summary statistics and manual examination of details. In this paper, we provide an interactive visualization approach to support such continuously-valued classifier examination tasks. Our approach addresses the three phases of these tasks: calibration, operating point selection, and examination. We enhance standard views and introduce task-specific views so that they can be integrated into a multi-view coordination (MVC) system. We build on an existing comparison-based approach, extending it to continuous classifiers by treating the continuous values as trinary (positive, unsure, negative) even if the classifier will not ultimately use the 3-way classification. 
We provide use cases that demonstrate how our approach enables machine learning practitioners to accomplish key tasks.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 2","pages":"Pages 74-86"},"PeriodicalIF":3.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000195/pdfft?md5=6a0480389b3bd0b919007d8d1decc35d&pid=1-s2.0-S2468502X22000195-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85347931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Color and Shape efficiency for outlier detection from automated to user evaluation
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.03.001
Loann Giovannangeli, Romain Bourqui, Romain Giot, David Auber

The design of efficient representations is well established as a fruitful way to explore and analyze complex or large data. In these representations, data are encoded with various visual attributes depending on the needs of the representation itself. To make coherent design choices about visual attributes, the visual search field proposes guidelines based on the human brain’s perception of features. However, information visualization representations frequently need to depict more data than these guidelines have been validated on. Since then, the information visualization community has extended these guidelines to a wider parameter space.

This paper contributes to this theme by extending visual search theories to an information visualization context. We consider a visual search task where subjects are asked to find an unknown outlier in a grid of randomly laid-out distractors. Stimuli are defined by color and shape features for the purpose of visually encoding categorical data. The experimental protocol consists of a parameter-space reduction step (i.e., sub-sampling) based on a machine learning model, and a user evaluation to validate hypotheses and measure capacity limits. The results show that the major difficulty factor is the number of visual attributes used to encode the outlier. When the outlier is redundantly encoded, display heterogeneity has no effect on the task. When it is encoded with one attribute, the difficulty depends on that attribute’s heterogeneity until its capacity limit (7 for color, 5 for shape) is reached. Finally, when it is encoded with two attributes simultaneously, performance drops drastically even with minor heterogeneity.
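The stimulus design can be illustrated with a toy generator that lays out identical distractors plus one outlier differing in color and/or shape. Attribute values and the helper name are hypothetical; this is not the experimental software:

```python
import random

def make_stimuli(n, colors, shapes, rng=random.Random(42)):
    """Build n distractors plus one outlier for a visual search trial.

    Distractors all share one (color, shape) pair; the outlier differs
    in at least one attribute. Returns the shuffled grid and the
    outlier stimulus.
    """
    base = (rng.choice(colors), rng.choice(shapes))
    outlier = base
    while outlier == base:  # resample until the outlier differs
        outlier = (rng.choice(colors), rng.choice(shapes))
    grid = [base] * n + [outlier]
    rng.shuffle(grid)
    return grid, outlier

grid, outlier = make_stimuli(8, ["red", "blue"], ["circle", "square"])
```

Varying how many attributes separate the outlier from the distractors (one, two, or a redundant pair) reproduces the conditions whose difficulty the study measures.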

{"title":"Color and Shape efficiency for outlier detection from automated to user evaluation","authors":"Loann Giovannangeli,&nbsp;Romain Bourqui,&nbsp;Romain Giot,&nbsp;David Auber","doi":"10.1016/j.visinf.2022.03.001","DOIUrl":"10.1016/j.visinf.2022.03.001","url":null,"abstract":"<div><p>The design of efficient representations is well established as a fruitful way to explore and analyze complex or large data. In these representations, data are encoded with various visual attributes depending on the needs of the representation itself. To make coherent design choices about visual attributes, the visual search field proposes guidelines based on the human brain’s perception of features. However, information visualization representations frequently need to depict more data than the amount these guidelines have been validated on. Since, the information visualization community has extended these guidelines to a wider parameter space.</p><p>This paper contributes to this theme by extending visual search theories to an information visualization context. We consider a visual search task where subjects are asked to find an unknown outlier in a grid of randomly laid out distractors. Stimuli are defined by color and shape features for the purpose of visually encoding categorical data. The experimental protocol is made of a parameters space reduction step (<em>i.e.</em>, sub-sampling) based on a machine learning model, and a user evaluation to validate hypotheses and measure capacity limits. The results show that the major difficulty factor is the number of visual attributes that are used to encode the outlier. When redundantly encoded, the display heterogeneity has no effect on the task. When encoded with one attribute, the difficulty depends on that attribute heterogeneity until its capacity limit (7 for color, 5 for shape) is reached. 
Finally, when encoded with two attributes simultaneously, performances drop drastically even with minor heterogeneity.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 2","pages":"Pages 25-40"},"PeriodicalIF":3.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000146/pdfft?md5=3a4ee1c7cac8f90eeb5e72a02337dd27&pid=1-s2.0-S2468502X22000146-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124374144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
MDISN: Learning multiscale deformed implicit fields from single images
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.03.003
Yujie Wang , Yixin Zhuang , Yunzhe Liu , Baoquan Chen

We present a multiscale deformed implicit surface network (MDISN) to reconstruct 3D objects from single images by progressively adapting the implicit surface of the target object to the input image, from coarse to fine. The basic idea is to optimize the implicit surface according to the changes across consecutive feature maps of the input image. With multi-resolution feature maps, the implicit field is refined progressively, such that lower resolutions outline the main object components and higher resolutions reveal fine-grained geometric details. To better exploit the changes in feature maps, we devise a simple field deformation module that receives two consecutive feature maps and refines the implicit field with finer geometric details. Experimental results on both synthetic and real-world datasets demonstrate the superiority of the proposed method over state-of-the-art methods.

{"title":"MDISN: Learning multiscale deformed implicit fields from single images","authors":"Yujie Wang ,&nbsp;Yixin Zhuang ,&nbsp;Yunzhe Liu ,&nbsp;Baoquan Chen","doi":"10.1016/j.visinf.2022.03.003","DOIUrl":"10.1016/j.visinf.2022.03.003","url":null,"abstract":"<div><p>We present a multiscale deformed implicit surface network (MDISN) to reconstruct 3D objects from single images by adapting the implicit surface of the target object from coarse to fine to the input image. The basic idea is to optimize the implicit surface according to the change of consecutive feature maps from the input image. And with multi-resolution feature maps, the implicit field is refined progressively, such that lower resolutions outline the main object components, and higher resolutions reveal fine-grained geometric details. To better explore the changes in feature maps, we devise a simple field deformation module that receives two consecutive feature maps to refine the implicit field with finer geometric details. Experimental results on both synthetic and real-world datasets demonstrate the superiority of the proposed method compared to state-of-the-art methods.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 2","pages":"Pages 41-49"},"PeriodicalIF":3.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X2200016X/pdfft?md5=7a2c3ab7456139b67e5be7c06fdac2f5&pid=1-s2.0-S2468502X2200016X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122912047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
A machine learning approach for predicting human shortest path task performance
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.04.001
Shijun Cai , Seok-Hee Hong , Xiaobo Xia , Tongliang Liu , Weidong Huang

Finding a shortest path for a given pair of vertices in a graph drawing is one of the fundamental tasks for qualitative evaluation of graph drawings. In this paper, we present the first machine learning approach to predict human shortest path task performance, including accuracy, response time, and mental effort.

To predict the shortest path task performance, we utilize correlated quality metrics and the ground truth data from the shortest path experiments. Specifically, we introduce path faithfulness metrics and show strong correlations with the shortest path task performance. Moreover, to mitigate the problem of insufficient ground truth training data, we use the transfer learning method to pre-train our deep model, exploiting the correlated quality metrics.

Experimental results using the ground truth human shortest path experiment data show that our models can successfully predict the shortest path task performance. In particular, model MSP achieves an MSE (i.e., test mean square error) of 0.7243 (i.e., data range from −17.27 to 1.81) for prediction.

{"title":"A machine learning approach for predicting human shortest path task performance","authors":"Shijun Cai ,&nbsp;Seok-Hee Hong ,&nbsp;Xiaobo Xia ,&nbsp;Tongliang Liu ,&nbsp;Weidong Huang","doi":"10.1016/j.visinf.2022.04.001","DOIUrl":"10.1016/j.visinf.2022.04.001","url":null,"abstract":"<div><p>Finding a shortest path for a given pair of vertices in a graph drawing is one of the fundamental tasks for qualitative evaluation of graph drawings. In this paper, we present the first machine learning approach to predict human shortest path task performance, including accuracy, response time, and mental effort.</p><p>To predict the shortest path task performance, we utilize correlated quality metrics and the ground truth data from the shortest path experiments. Specifically, we introduce <em>path faithfulness metrics</em> and show strong correlations with the shortest path task performance. Moreover, to mitigate the problem of insufficient ground truth training data, we use the transfer learning method to pre-train our deep model, exploiting the correlated quality metrics.</p><p>Experimental results using the ground truth human shortest path experiment data show that our models can successfully predict the shortest path task performance. 
In particular, model MSP achieves an MSE (i.e., test mean square error) of 0.7243 (i.e., data range from −17.27 to 1.81) for prediction.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 2","pages":"Pages 50-61"},"PeriodicalIF":3.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000183/pdfft?md5=8b220940e42fe9792587af3d422a3e28&pid=1-s2.0-S2468502X22000183-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115328161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Perspectives of visualization onboarding and guidance in VA
IF 3.0 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2022-03-01 | DOI: 10.1016/j.visinf.2022.02.005
Christina Stoiber , Davide Ceneda , Markus Wagner , Victor Schetinger , Theresia Gschwandtner , Marc Streit , Silvia Miksch , Wolfgang Aigner

A typical problem in Visual Analytics (VA) is that users are highly trained experts in their application domains, but have mostly no experience in using VA systems. Thus, users often have difficulties interpreting and working with visual representations. To overcome these problems, user assistance can be incorporated into VA systems to guide experts through the analysis while closing their knowledge gaps. Different types of user assistance can be applied to extend the power of VA, enhance the user’s experience, and broaden the audience for VA. Although different approaches to visualization onboarding and guidance in VA already exist, there is a lack of research on how to design and integrate them in effective and efficient ways. Therefore, we aim at putting together the pieces of the mosaic to form a coherent whole. Based on the Knowledge-Assisted Visual Analytics model, we contribute a conceptual model of user assistance for VA by integrating the process of visualization onboarding and guidance as the two main approaches in this direction. As a result, we clarify and discuss the commonalities and differences between visualization onboarding and guidance, and discuss how they benefit from the integration of knowledge extraction and exploration. Finally, we discuss our descriptive model by applying it to VA tools integrating visualization onboarding and guidance, and showing how they should be utilized in different phases of the analysis in order to be effective and accepted by the user.

Visual Informatics, Volume 6, Issue 1, March 2022, Pages 68–83. DOI: 10.1016/j.visinf.2022.02.005 (open access).
Citations: 14
Computing for Chinese Cultural Heritage
IF 3 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2022-03-01 | DOI: 10.1016/j.visinf.2021.12.006
Meng Li , Yun Wang , Ying-Qing Xu

Implementing computational methods for the preservation, inheritance, and promotion of Cultural Heritage (CH) has become a research trend across the world since the 1990s. In China, generations of scholars have dedicated themselves to studying the country’s rich CH resources; there is great potential and opportunity in the field of computational research on specific cultural artefacts or art forms. Based on previous works, this paper proposes a systematic framework for Chinese Cultural Heritage Computing that consists of three conceptual levels: Chinese CH protection and development strategy, the computing process, and the computable cultural ecosystem. The computing process includes three modules: (1) data acquisition and processing, (2) digital modeling and database construction, and (3) data application and promotion. The modules demonstrate the computing approaches corresponding to different phases of Chinese CH protection and development, from digital preservation and inheritance to presentation and promotion. The computing results can become the basis for the generation of cultural genes and eventually the formation of a computable cultural ecosystem. Case studies on the Mogao caves in Dunhuang and the art of Guqin, recognized as important tangible and intangible world cultural heritage respectively, are carried out to elaborate the computing process and methods within the framework. With continuous advances in data collection, processing, and display technologies, the framework can provide a constructive reference for building future research roadmaps in Chinese CH computing and related fields, toward the sustainable protection and development of Chinese CH in the digital age.

Visual Informatics, Volume 6, Issue 1, March 2022, Pages 1–13. DOI: 10.1016/j.visinf.2021.12.006 (open access).
Citations: 16
Reconfiguration of the brain during aesthetic experience on Chinese calligraphy—Using brain complex networks
IF 3 | CAS Tier 3 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2022-03-01 | DOI: 10.1016/j.visinf.2022.02.002
Rui Li , Xiaofei Jia , Changle Zhou , Junsong Zhang

Chinese calligraphy, as a well-known performing art form, occupies an important role in the intangible cultural heritage of China. Previous studies focused on the psychophysiological benefits of Chinese calligraphy; little attention has been paid to its aesthetic attributes and their effect on the cognitive process. To complement our understanding of Chinese calligraphy, this study investigated the aesthetic experience of Chinese cursive-style calligraphy using brain functional network analysis. Subjects sat on a couch and rested for several minutes, and were then asked to appreciate artworks of cursive-style calligraphy. Results showed that (1) changes in functional connectivity between fronto-occipital, fronto-parietal, bilateral parietal, and central–occipital areas are prominent in the calligraphy condition, and (2) the brain functional network showed an increased normalized cluster coefficient in the calligraphy condition in the alpha2 and gamma bands. These results demonstrate that the brain functional network undergoes a dynamic reconfiguration during the aesthetic experience of Chinese calligraphy, providing evidence that this experience shares several similarities with Western art while retaining its unique character as an Eastern traditional art form.
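The key quantitative measure in this abstract is the normalized cluster coefficient of the brain functional network. As a rough illustration of how such a measure is commonly computed — not the authors' actual EEG pipeline — the sketch below divides a network's average clustering coefficient by the mean clustering of size-matched Erdős–Rényi random graphs; the toy ring-lattice "functional network" and all function names are illustrative assumptions.

```python
import itertools
import random

def avg_clustering(adj):
    """Average local clustering coefficient of an undirected graph,
    given as {node: set_of_neighbors}."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # nodes with fewer than 2 neighbors contribute 0
        # count edges among this node's neighbors
        links = sum(1 for u, v in itertools.combinations(sorted(nbrs), 2)
                    if v in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def random_graph(n, m, rng):
    """Erdos-Renyi G(n, m): n nodes, m distinct random edges."""
    adj = {i: set() for i in range(n)}
    edges = set()
    while len(edges) < m:
        u, v = rng.sample(range(n), 2)
        e = (min(u, v), max(u, v))
        if e not in edges:
            edges.add(e)
            adj[u].add(v)
            adj[v].add(u)
    return adj

def normalized_clustering(adj, n_random=20, seed=0):
    """Clustering of the observed network divided by the mean clustering
    of random graphs with the same number of nodes and edges."""
    rng = random.Random(seed)
    n = len(adj)
    m = sum(len(s) for s in adj.values()) // 2
    c_rand = sum(avg_clustering(random_graph(n, m, rng))
                 for _ in range(n_random)) / n_random
    return avg_clustering(adj) / c_rand

# toy "functional network": a ring lattice where each node links to its
# two nearest neighbours on each side (average clustering is exactly 0.5)
n = 30
lattice = {i: {(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n}
           for i in range(n)}
```

A ratio well above 1 means the network is more locally clustered than chance, which is the property the abstract reports increasing in the alpha2 and gamma bands during calligraphy appreciation.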

Visual Informatics, Volume 6, Issue 1, March 2022, Pages 35–46. DOI: 10.1016/j.visinf.2022.02.002 (open access).
Citations: 4