
Latest publications in Visual Informatics

VETA: Visual eye-tracking analytics for the exploration of gaze patterns and behaviours
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.02.004
Sarah Goodwin , Arnaud Prouzeau , Ryan Whitelock-Jones , Christophe Hurter , Lee Lawrence , Umair Afzal , Tim Dwyer

Eye tracking is growing in popularity for multiple application areas, yet analysing and exploring the large volume of complex data remains difficult for most users. We present a comprehensive eye tracking visual analytics system to enable the exploration and presentation of eye-tracking data across time and space in an efficient manner. The application allows the user to gain an overview of general patterns and perform deep visual analysis of local gaze exploration. The ability to link directly to the video of the underlying scene allows the visualisation insights to be verified on the fly. The system was motivated by the need to analyse eye-tracking data collected from an ‘in the wild’ study with energy network operators and has been further evaluated via interviews with 14 eye-tracking experts in multiple domains. Results suggest that, thanks to state-of-the-art visualisation techniques and by providing context with videos, our system could enable an improved analysis of eye-tracking data through interactive exploration, facilitating comparison between different participants or conditions, thus enhancing the presentation of complex data analysis to non-experts. This research paper provides three contributions: (1) analysis of a motivational use case demonstrating the need for rich visual-analytics workflow tools for eye-tracking data; (2) a highly dynamic system to visually explore and present complex eye-tracking data; (3) insights from our applied use case evaluation and interviews with experienced users demonstrating the potential for the system and visual analytics for the wider eye-tracking community.
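Analyses like the deep visual exploration of local gaze behaviour described above usually operate on fixations rather than raw gaze samples. As a hypothetical illustration only (not the authors' implementation), a minimal dispersion-threshold (I-DT) fixation detector might look like this:

```python
def detect_fixations(samples, max_dispersion=30.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection.

    A run of gaze samples counts as a fixation when its spatial dispersion
    (x-range + y-range) stays under max_dispersion for at least min_duration
    seconds. samples: list of (timestamp, x, y) tuples, time-ordered.
    Returns a list of (start_time, end_time, centroid_x, centroid_y).
    """
    fixations = []
    i = 0
    while i < len(samples):
        j = i
        # Grow the window while adding the next sample keeps dispersion low.
        while j + 1 < len(samples):
            window = samples[i:j + 2]
            xs = [x for _, x, _ in window]
            ys = [y for _, y, _ in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        t0, t1 = samples[i][0], samples[j][0]
        if t1 - t0 >= min_duration:
            window = samples[i:j + 1]
            cx = sum(x for _, x, _ in window) / len(window)
            cy = sum(y for _, y, _ in window) / len(window)
            fixations.append((t0, t1, cx, cy))
            i = j + 1
        else:
            i += 1
    return fixations
```

The detected fixation list is the kind of intermediate representation a system like VETA could then aggregate over time and space or link back to the scene video.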

Citations: 7
Color and Shape efficiency for outlier detection from automated to user evaluation
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.03.001
Loann Giovannangeli, Romain Bourqui, Romain Giot, David Auber

The design of efficient representations is well established as a fruitful way to explore and analyze complex or large data. In these representations, data are encoded with various visual attributes depending on the needs of the representation itself. To make coherent design choices about visual attributes, the visual search field proposes guidelines based on the human brain’s perception of features. However, information visualization representations frequently need to depict more data than the amount these guidelines have been validated on. Since then, the information visualization community has extended these guidelines to a wider parameter space.

This paper contributes to this theme by extending visual search theories to an information visualization context. We consider a visual search task where subjects are asked to find an unknown outlier in a grid of randomly laid out distractors. Stimuli are defined by color and shape features for the purpose of visually encoding categorical data. The experimental protocol consists of a parameter-space reduction step (i.e., sub-sampling) based on a machine learning model, followed by a user evaluation to validate hypotheses and measure capacity limits. The results show that the major difficulty factor is the number of visual attributes that are used to encode the outlier. When redundantly encoded, the display heterogeneity has no effect on the task. When encoded with one attribute, the difficulty depends on that attribute heterogeneity until its capacity limit (7 for color, 5 for shape) is reached. Finally, when encoded with two attributes simultaneously, performances drop drastically even with minor heterogeneity.
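A trial of the visual search task described above can be sketched as follows. This is a hypothetical generator, not the authors' protocol code; the colour and shape vocabularies are placeholder names sized to the reported capacity limits (7 colours, 5 shapes):

```python
import random

COLORS = ["red", "green", "blue", "orange", "purple", "cyan", "magenta"]  # 7: color capacity limit
SHAPES = ["circle", "square", "triangle", "diamond", "cross"]             # 5: shape capacity limit

def make_trial(grid_size=8, n_colors=3, n_shapes=3, outlier_attrs=("color",), seed=None):
    """Build one visual-search trial: a grid of distractors plus one outlier.

    Distractors are drawn from the first n_colors/n_shapes values (display
    heterogeneity); the outlier differs on every attribute in outlier_attrs,
    using a value no distractor carries. Requires n_colors < len(COLORS)
    and n_shapes < len(SHAPES) for the relevant attributes.
    Returns (grid, outlier_position).
    """
    rng = random.Random(seed)
    grid = [
        {"color": rng.choice(COLORS[:n_colors]), "shape": rng.choice(SHAPES[:n_shapes])}
        for _ in range(grid_size * grid_size)
    ]
    outlier = dict(grid[0])
    if "color" in outlier_attrs:
        outlier["color"] = COLORS[n_colors]  # a colour unused by distractors
    if "shape" in outlier_attrs:
        outlier["shape"] = SHAPES[n_shapes]  # a shape unused by distractors
    pos = rng.randrange(len(grid))
    grid[pos] = outlier
    return grid, pos
```

Passing `outlier_attrs=("color", "shape")` gives the redundant-encoding condition, while varying `n_colors`/`n_shapes` controls the display heterogeneity whose capacity limits the study measures.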

Citations: 4
MDISN: Learning multiscale deformed implicit fields from single images
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.03.003
Yujie Wang , Yixin Zhuang , Yunzhe Liu , Baoquan Chen

We present a multiscale deformed implicit surface network (MDISN) to reconstruct 3D objects from single images by adapting the implicit surface of the target object to the input image, from coarse to fine. The basic idea is to optimize the implicit surface according to the change of consecutive feature maps from the input image. With multi-resolution feature maps, the implicit field is refined progressively, such that lower resolutions outline the main object components and higher resolutions reveal fine-grained geometric details. To better explore the changes in feature maps, we devise a simple field deformation module that receives two consecutive feature maps to refine the implicit field with finer geometric details. Experimental results on both synthetic and real-world datasets demonstrate the superiority of the proposed method compared to state-of-the-art methods.
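The coarse-to-fine idea can be caricatured with scalars standing in for feature maps. Everything below is a toy assumption for illustration, not the MDISN architecture: the implicit field is a signed distance function, and each level adds a residual driven by the change between consecutive feature values:

```python
def sdf_circle(x, y, radius):
    """Signed distance to a circle: negative inside, positive outside."""
    return (x * x + y * y) ** 0.5 - radius

def refine_field(query_points, level_features, step=0.1):
    """Coarse-to-fine refinement of an implicit field.

    Start from a coarse implicit surface (a unit circle) and, at each
    resolution level, deform the field by a residual proportional to the
    change between consecutive per-level feature values. In the real
    network the residual would be predicted from full feature maps.
    """
    field = {p: sdf_circle(p[0], p[1], 1.0) for p in query_points}
    for coarse, fine in zip(level_features, level_features[1:]):
        residual = step * (fine - coarse)  # "deformation" from the feature change
        for p in field:
            field[p] += residual
    return field
```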

Citations: 4
A machine learning approach for predicting human shortest path task performance
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.04.001
Shijun Cai , Seok-Hee Hong , Xiaobo Xia , Tongliang Liu , Weidong Huang

Finding a shortest path for a given pair of vertices in a graph drawing is one of the fundamental tasks for qualitative evaluation of graph drawings. In this paper, we present the first machine learning approach to predict human shortest path task performance, including accuracy, response time, and mental effort.

To predict the shortest path task performance, we utilize correlated quality metrics and the ground truth data from the shortest path experiments. Specifically, we introduce path faithfulness metrics and show strong correlations with the shortest path task performance. Moreover, to mitigate the problem of insufficient ground truth training data, we use the transfer learning method to pre-train our deep model, exploiting the correlated quality metrics.

Experimental results using the ground truth human shortest path experiment data show that our models can successfully predict the shortest path task performance. In particular, model MSP achieves an MSE (i.e., test mean square error) of 0.7243 (i.e., data range from −17.27 to 1.81) for prediction.
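The ground truth against which human shortest-path answers are scored is the graph-theoretic shortest path; for an unweighted graph drawing it can be computed with breadth-first search, and the reported metric is the mean square error of the model's predictions. A generic sketch of both (not the authors' tooling):

```python
from collections import deque

def shortest_path(adj, src, dst):
    """Shortest path in an unweighted graph via breadth-first search.

    adj maps each vertex to an iterable of neighbours. Returns the list of
    vertices from src to dst, or None if dst is unreachable.
    """
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:  # reconstruct the path by walking predecessors back
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None

def mse(y_true, y_pred):
    """Mean square error, the evaluation metric reported for the models."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
```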

Citations: 0
Perspectives of visualization onboarding and guidance in VA
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-03-01 | DOI: 10.1016/j.visinf.2022.02.005
Christina Stoiber , Davide Ceneda , Markus Wagner , Victor Schetinger , Theresia Gschwandtner , Marc Streit , Silvia Miksch , Wolfgang Aigner

A typical problem in Visual Analytics (VA) is that users are highly trained experts in their application domains, but have mostly no experience in using VA systems. Thus, users often have difficulties interpreting and working with visual representations. To overcome these problems, user assistance can be incorporated into VA systems to guide experts through the analysis while closing their knowledge gaps. Different types of user assistance can be applied to extend the power of VA, enhance the user’s experience, and broaden the audience for VA. Although different approaches to visualization onboarding and guidance in VA already exist, there is a lack of research on how to design and integrate them in effective and efficient ways. Therefore, we aim at putting together the pieces of the mosaic to form a coherent whole. Based on the Knowledge-Assisted Visual Analytics model, we contribute a conceptual model of user assistance for VA by integrating the process of visualization onboarding and guidance as the two main approaches in this direction. As a result, we clarify and discuss the commonalities and differences between visualization onboarding and guidance, and discuss how they benefit from the integration of knowledge extraction and exploration. Finally, we discuss our descriptive model by applying it to VA tools integrating visualization onboarding and guidance, and showing how they should be utilized in different phases of the analysis in order to be effective and accepted by the user.

Citations: 14
Computing for Chinese Cultural Heritage
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-03-01 | DOI: 10.1016/j.visinf.2021.12.006
Meng Li , Yun Wang , Ying-Qing Xu

Implementing computational methods for preservation, inheritance, and promotion of Cultural Heritage (CH) has become a research trend across the world since the 1990s. In China, generations of scholars have dedicated themselves to studying the country’s rich CH resources; there is great potential and opportunity in the field of computational research on specific cultural artefacts or artforms. Based on previous works, this paper proposes a systematic framework for Chinese Cultural Heritage Computing that consists of three conceptual levels: Chinese CH protection and development strategy, the computing process, and the computable cultural ecosystem. The computing process includes three modules: (1) data acquisition and processing, (2) digital modeling and database construction, and (3) data application and promotion. The modules demonstrate the computing approaches corresponding to different phases of Chinese CH protection and development, from digital preservation and inheritance to presentation and promotion. The computing results can become the basis for the generation of cultural genes and, eventually, the formation of a computable cultural ecosystem. Case studies on the Mogao caves in Dunhuang and the art of Guqin, recognized as important tangible and intangible world cultural heritage respectively, are carried out to elaborate the computing process and methods within the framework. With continuous advances in data collection, processing, and display technologies, the framework can provide a constructive reference for building future research roadmaps in Chinese CH computing and related fields, and for the sustainable protection and development of Chinese CH in the digital age.

Citations: 16
Reconfiguration of the brain during aesthetic experience on Chinese calligraphy—Using brain complex networks
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-03-01 | DOI: 10.1016/j.visinf.2022.02.002
Rui Li , Xiaofei Jia , Changle Zhou , Junsong Zhang

Chinese calligraphy, as a well-known performing art form, occupies an important role in the intangible cultural heritage of China. Previous studies focused on the psychophysiological benefits of Chinese calligraphy; little attention has been paid to its aesthetic attributes and its effect on the cognitive process. To complement our understanding of Chinese calligraphy, this study investigated the aesthetic experience of Chinese cursive-style calligraphy using brain functional network analysis. Subjects stayed on the couch and rested for several minutes; they were then asked to appreciate artworks of cursive-style calligraphy. Results showed that (1) changes in functional connectivity between fronto-occipital, fronto-parietal, bilateral parietal, and central–occipital areas are prominent in the calligraphy condition, and (2) the brain functional network showed an increased normalized cluster coefficient in the calligraphy condition in the alpha2 and gamma bands. These results demonstrate that the brain functional network undergoes a dynamic reconfiguration during the aesthetic experience of Chinese calligraphy, providing evidence that this experience has several similarities with western art while retaining its unique character as an eastern traditional art form.
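The normalized cluster coefficient reported for the alpha2 and gamma bands is a standard graph measure: the network's average clustering divided by the value expected for a reference random graph. One common (assumed here) normalization uses a random graph of equal density, where the expected clustering is approximately the edge probability p:

```python
def clustering_coefficient(adj, node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    neigh = list(adj[node])
    k = len(neigh)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if neigh[j] in adj[neigh[i]])
    return 2.0 * links / (k * (k - 1))

def normalized_clustering(adj):
    """Average clustering divided by the random-graph expectation.

    adj maps each node to a set of neighbours (undirected). For an
    Erdos-Renyi reference graph of the same density, the expected
    clustering is p = 2m / (n(n-1)).
    """
    nodes = list(adj)
    n = len(nodes)
    m = sum(len(adj[u]) for u in nodes) // 2  # each edge counted twice
    avg_c = sum(clustering_coefficient(adj, u) for u in nodes) / n
    p = 2.0 * m / (n * (n - 1))
    return avg_c / p
```

In an EEG functional-network pipeline, `adj` would come from thresholding a band-specific connectivity matrix; values above 1 indicate more clustering than expected by chance.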

Citations: 4
AFExplorer: Visual analysis and interactive selection of audio features
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-03-01 | DOI: 10.1016/j.visinf.2022.02.003
Lei Wang, Guodao Sun, Yunchao Wang, Ji Ma, Xiaomin Zhao, Ronghua Liang

Acoustic quality detection is vital in the manufactured-products quality control field since it reflects the condition of machines or products. Recent work has employed machine learning models on manufacturing audio data to detect anomalous patterns. A major challenge is how to select applicable audio features to improve a model’s accuracy and precision. To address this challenge, we extract and analyze three audio feature types, including Time Domain Features, Frequency Domain Features, and Cepstrum Features, to help identify potential linear and non-linear relationships. In addition, we design a visual analysis system, namely AFExplorer, to assist data scientists in extracting audio features and selecting potential feature combinations. AFExplorer integrates four main views to present the detailed distribution and relevance of the audio features, which helps users observe the impact of features visually during feature selection. We perform case studies with AFExplorer on the ToyADMOS and MIMII datasets to demonstrate the usability and effectiveness of the proposed system.
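One representative of each of the three feature families can be sketched with stdlib-only Python. This is a hypothetical illustration of the feature types, not AFExplorer's implementation; real pipelines would use an FFT and frame the signal into windows:

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform (O(n^2)); adequate for a sketch."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def rms(signal):
    """Time Domain Feature: root-mean-square energy."""
    return math.sqrt(sum(s * s for s in signal) / len(signal))

def spectral_centroid(signal, sample_rate):
    """Frequency Domain Feature: magnitude-weighted mean frequency (Hz)."""
    n = len(signal)
    mags = [abs(c) for c in dft(signal)[: n // 2 + 1]]  # non-negative bins
    freqs = [k * sample_rate / n for k in range(len(mags))]
    total = sum(mags)
    return sum(f * m for f, m in zip(freqs, mags)) / total if total else 0.0

def real_cepstrum(signal):
    """Cepstrum Feature: inverse DFT of the log magnitude spectrum."""
    n = len(signal)
    log_mag = [math.log(abs(c) + 1e-12) for c in dft(signal)]  # avoid log(0)
    return [sum(log_mag[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]
```

Computing such features per recording yields the feature table whose distributions and pairwise relevance a tool like AFExplorer would then visualize.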

Citations: 6
A restoration method using dual generate adversarial networks for Chinese ancient characters
IF 3.0 | CAS Tier 3, Computer Science | JCR Q2, Computer Science | Pub Date: 2022-03-01 | DOI: 10.1016/j.visinf.2022.02.001
Benpeng Su , Xuxing Liu , Weize Gao , Ye Yang , Shanxiong Chen

Ancient books that record the history of different periods are precious to human civilization, but their preservation faces serious problems such as aging. It is therefore significant to repair the damaged characters in ancient books and restore their original textures. Restoring a damaged character requires keeping the stroke shapes correct and the font style consistent. To solve these problems, this paper proposes a new restoration method based on generative adversarial networks. A shape restoration network completes stroke-shape and font-style recovery, while a texture repair network reconstructs texture details. To improve the accuracy of the generator in the shape restoration network, we use an adversarial feature loss (AFL), which updates the generator and discriminator synchronously, in place of the traditional perceptual loss. Meanwhile, a font style loss is proposed to maintain stylistic consistency across the whole character. Our model is evaluated on the Yi and Qing datasets and outperforms current state-of-the-art techniques both quantitatively and qualitatively. In particular, the Structural Similarity increases by 8.0% and 6.7% respectively on the two datasets.
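The abstract does not define its font style loss; one common way to enforce stylistic consistency between a generated character and a reference font, shown purely as an illustrative sketch and not as the paper's formulation, is a Gram-matrix style loss over feature maps (all shapes and names below are hypothetical):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map:
    channel-wise correlations that characterize style/texture."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(generated_feats, reference_feats):
    """Mean squared difference between the Gram matrices of the
    generated character's features and the reference font's features."""
    g_gen = gram_matrix(generated_feats)
    g_ref = gram_matrix(reference_feats)
    return float(np.mean((g_gen - g_ref) ** 2))

# Identical feature maps yield zero style loss; differing ones do not.
rng = np.random.default_rng(0)
f = rng.standard_normal((8, 16, 16))
assert style_loss(f, f) == 0.0
```

Because the Gram matrix discards spatial layout while keeping channel correlations, a loss of this form penalizes deviation in texture and stroke style without constraining where individual strokes fall.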

Citations: 6
Metaverse: Perspectives from graphics, interactions and visualization
IF 3.0 | CAS Tier 3, Computer Science | JCR Q2, Computer Science | Pub Date: 2022-03-01 | DOI: 10.1016/j.visinf.2022.03.002
Yuheng Zhao , Jinjing Jiang , Yi Chen , Richen Liu , Yalong Yang , Xiangyang Xue , Siming Chen

The metaverse is a visual world that blends the physical and digital worlds. At present, the development of the metaverse is still at an early stage, and a framework for its visual construction and exploration is lacking. In this paper, we propose a framework that summarizes how graphics, interaction, and visualization techniques support the visual construction of the metaverse and user-centric exploration. We introduce three kinds of visual elements that compose the metaverse and two graphical construction methods organized in a pipeline. We propose a taxonomy of interaction technologies based on interaction tasks, user actions, feedback, and various sensory channels, and a taxonomy of visualization techniques that assist user awareness. Current potential applications and future opportunities are discussed in the context of the visual construction and exploration of the metaverse. We hope this paper can provide a stepping stone for further research on graphics, interaction, and visualization in the metaverse.

Citations: 91