
Latest Publications in Visual Informatics

Metaverse: Perspectives from graphics, interactions and visualization
IF 3 · Region 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2022-03-01 · DOI: 10.1016/j.visinf.2022.03.002
Yuheng Zhao, Jinjing Jiang, Yi Chen, Richen Liu, Yalong Yang, Xiangyang Xue, Siming Chen

The metaverse is a visual world that blends the physical and digital worlds. At present, the development of the metaverse is still at an early stage, and a framework for its visual construction and exploration is lacking. In this paper, we propose a framework that summarizes how graphics, interaction, and visualization techniques support the visual construction of the metaverse and user-centric exploration. We introduce three kinds of visual elements that compose the metaverse and two graphical construction methods organized in a pipeline. We propose a taxonomy of interaction technologies based on interaction tasks, user actions, feedback, and the various sensory channels, and a taxonomy of visualization techniques that assist user awareness. Current potential applications and future opportunities are discussed in the context of the visual construction and exploration of the metaverse. We hope this paper can serve as a stepping stone for further research on graphics, interaction, and visualization in the metaverse.

Citations: 91
A learning-based approach for efficient visualization construction
IF 3 · Region 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2022-03-01 · DOI: 10.1016/j.visinf.2022.01.001
Yongjian Sun, Jie Li, Siming Chen, Gennady Andrienko, Natalia Andrienko, Kang Zhang

We propose an approach that underpins interactive visual exploration of large data volumes by training a Learned Visualization Index (LVI). Knowing in advance the data, the aggregation functions used for visualization, the visual encoding, and the available interactive operations for data selection, LVI avoids time-consuming retrieval and processing of raw data in response to user interactions. Instead, LVI directly predicts the aggregates of interest for the user's data selection. We demonstrate the efficiency of the proposed approach in two use cases of spatio-temporal data at different scales.
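
As a rough, hypothetical sketch of the LVI idea (not the paper's code): train a regressor offline on (selection → aggregate) pairs computed from the raw data, then answer interactive selections with a model prediction instead of a raw-data scan. The MLP, the box-shaped selections, and every name below are illustrative assumptions.

```python
# Hypothetical LVI-style sketch: offline training on true aggregates,
# online prediction for interactive selections.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
points = rng.random((20_000, 4))  # raw records: (x, y, t, value)

def true_aggregate(sel):
    """Slow path: mean of `value` inside an (x, y, t) selection box."""
    x0, x1, y0, y1, t0, t1 = sel
    m = ((points[:, 0] >= x0) & (points[:, 0] <= x1) &
         (points[:, 1] >= y0) & (points[:, 1] <= y1) &
         (points[:, 2] >= t0) & (points[:, 2] <= t1))
    return points[m, 3].mean() if m.any() else 0.0

def random_selection():
    x0, y0, t0 = rng.random(3) * 0.7
    return (x0, x0 + 0.3, y0, y0 + 0.3, t0, t0 + 0.3)

# Offline: sample selections and compute their true aggregates once.
train_sel = np.array([random_selection() for _ in range(1_000)])
train_agg = np.array([true_aggregate(s) for s in train_sel])
lvi = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2_000,
                   random_state=0).fit(train_sel, train_agg)

# Online: a brushing interaction becomes one cheap model prediction.
q = random_selection()
print("predicted:", lvi.predict([q])[0], " true:", true_aggregate(q))
```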

Citations: 4
Natural multimodal interaction in immersive flow visualization
IF 3 · Region 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2021-12-01 · DOI: 10.1016/j.visinf.2021.12.005
Chengyu Su, Chao Yang, Yonghui Chen, Fupan Wang, Fang Wang, Yadong Wu, Xiaorong Zhang

In immersive flow visualization based on virtual reality, meeting the needs of complex, professional flow-visualization analysis through natural human–computer interaction is a pressing problem. To achieve natural and efficient human–computer interaction, we analyze the interaction requirements of flow visualization and study the characteristics of four human–computer interaction channels: hand, head, eye, and voice. We offer several multimodal interaction design suggestions and then propose three multimodal interaction methods: head & hand; head & hand & eye; and head & hand & eye & voice. The freedom of gestures, the stability of the head, the convenience of the eyes, and the rapid retrieval of voice commands are used to improve the accuracy and efficiency of interaction. The interaction load is balanced across modalities to reduce fatigue. The evaluation shows that our multimodal interaction achieves higher accuracy, better time efficiency, and much lower fatigue than traditional joystick interaction.
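
A toy illustration (ours, not the authors' system) of the channel roles above: voice for rapid command retrieval, stable head pose for coarse pointing, convenient eye gaze for target refinement, and free hand gestures for manipulation. All names are hypothetical.

```python
# Hypothetical fusion of four input channels into one interaction intent.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Frame:
    head_ray: Tuple[float, float, float]        # coarse pointing direction
    gaze_point: Optional[Tuple[float, float]]   # refined fixation, if any
    hand_gesture: str                           # "pinch", "drag", "open", ...
    voice_cmd: Optional[str]                    # e.g. "show streamlines"

def fuse(frame: Frame) -> str:
    # Voice offers the fastest retrieval, so explicit commands win.
    if frame.voice_cmd:
        return f"execute: {frame.voice_cmd}"
    # Head gives a stable coarse region; gaze refines the target in it;
    # the hand gesture decides which operation to apply.
    target = frame.gaze_point if frame.gaze_point else frame.head_ray
    if frame.hand_gesture == "pinch":
        return f"select seed point at {target}"
    if frame.hand_gesture == "drag":
        return f"move cutting plane toward {target}"
    return "idle"

print(fuse(Frame((0, 0, 1), (0.1, 0.2), "pinch", None)))
print(fuse(Frame((0, 0, 1), None, "open", "show streamlines")))
```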

Citations: 5
Visual storytelling of Song Ci and the poets in the social–cultural context of Song dynasty
IF 3 · Region 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2021-12-01 · DOI: 10.1016/j.visinf.2021.12.002
Wei Zhang, Qian Ma, Rusheng Pan, Wei Chen

Song Ci is treasured in traditional Chinese culture and reflects social and cultural evolution in ancient times. Despite the efforts of historians and litterateurs to investigate the characteristics of Song Ci, it remains unclear how to effectively distribute and promote Song Ci in the public sphere. The complexity and abstraction of Song Ci prevent the general public from closely reading, analyzing, and appreciating these excellent works. By means of a set of visual analysis methods, e.g., spatio-temporal visualization, we exploit visual storytelling to explicitly present the latent and abstract features of Song Ci. We apply straightforward visual charts and lighten the burden of understanding the stories in order to achieve effective public distribution. The effectiveness and aesthetics of our work are demonstrated by a user study with three participants from different backgrounds. The results reveal that our story is effective for the distribution, understanding, and promotion of Song Ci.

Citations: 4
Adaptive neighbor constrained deviation sparse variant fuzzy c-means clustering for brain MRI of AD subject
IF 3 · Region 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2021-12-01 · DOI: 10.1016/j.visinf.2021.12.001
Sukanta Ghosh, Amlan Pratim Hazarika, Abhijit Chandra, Rajani K. Mudi

The progression of Alzheimer's disease (AD) is closely associated with tissue loss in the medial temporal lobe (MTL) and enlargement of the lateral ventricle (LV). The early stage of AD, mild cognitive impairment (MCI), can be traced by diagnosing brain MRI scans with an advanced fuzzy c-means clustering algorithm, which helps clinicians take appropriate interventions. In this paper, sparsity is first introduced into the clustering method, and Rician noise is also incorporated for brain MR scans of AD subjects. Second, a novel neighbor-pixel-constrained fuzzy c-means clustering algorithm is designed, in which the topology-based selection of parsimonious neighbor pixels is automated. The adaptive choice of neighbor pixel classes outlines better-justified object edge boundaries and outperforms dynamic cluster output. The proposed adaptive neighbor constrained deviation sparse variant fuzzy c-means clustering (AN_DsFCM) can accommodate the imposed sparsity and withstand Rician noise in sparse environments. The novel algorithm is applied to MRI of AD subjects, and normative data are acquired to analyse clustering accuracy. The data processing pipeline of this theoretically plausible proposition is elaborated in detail. The experimental results are compared with those of state-of-the-art fuzzy clustering methods on test MRI scans. Visual evaluation and statistical measures are studied to meet both image processing and clinical neurophysiology standards. Overall, the performance of the proposed AN_DsFCM is significantly better than that of other methods.
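
For orientation, the sketch below implements only the standard fuzzy c-means baseline that AN_DsFCM builds on; the paper's deviation-sparse term, adaptive neighbor-pixel constraint, and Rician-noise handling are not reproduced here.

```python
# Baseline fuzzy c-means on pixel intensities (standard update loop only).
import numpy as np

def fcm(x, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """x: 1-D array of intensities; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                        # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)   # fuzzy-weighted cluster means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u_new = d ** (-2.0 / (m - 1.0))
        u_new /= u_new.sum(axis=0)            # standard FCM membership update
        if np.abs(u_new - u).max() < tol:
            return centers, u_new
        u = u_new
    return centers, u

# Synthetic "3-tissue" image: three Gaussian intensity modes.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(mu, 5.0, 500) for mu in (30, 100, 170)])
centers, u = fcm(pixels)
print(np.sort(centers))   # approximately [30, 100, 170]
```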

Citations: 6
Asyncflow: A visual programming tool for game artificial intelligence
IF 3 · Region 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2021-12-01 · DOI: 10.1016/j.visinf.2021.11.001
Zhipeng Hu, Changjie Fan, Qiwei Zheng, Wei Wu, Bai Liu

Visual programming tools are widely applied in the game industry to assist game designers in developing game artificial intelligence (game AI) and gameplay. However, testing across multiple game engines is a time-consuming operation that degrades development efficiency. To provide an asynchronous platform for game designers, this paper introduces Asyncflow, an open-source visual programming solution. It consists of a flowchart maker for explaining game logic and a runtime framework that integrates an asynchronous mechanism based on an event-driven architecture. Asyncflow supports multiple programming languages and can easily be embedded in various game engines to run the flowcharts created by game designers.
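
To make the event-driven, asynchronous mechanism concrete, here is a minimal sketch in which a flowchart node suspends on a game event and resumes when the engine fires it; the tiny event bus and node names are illustrative and are not Asyncflow's actual API.

```python
# Minimal event-driven flowchart runtime sketch (illustrative only).
import asyncio

class EventBus:
    def __init__(self):
        self._waiters: dict[str, list[asyncio.Future]] = {}

    def wait_for(self, event: str) -> asyncio.Future:
        fut = asyncio.get_running_loop().create_future()
        self._waiters.setdefault(event, []).append(fut)
        return fut

    def fire(self, event: str, payload=None):
        for fut in self._waiters.pop(event, []):
            fut.set_result(payload)

bus = EventBus()

async def guard_flowchart():
    # Flowchart logic: patrol -> (event: player_seen) -> chase -> repeat.
    while True:
        print("guard: patrolling")
        who = await bus.wait_for("player_seen")   # suspends this AI only
        print(f"guard: chasing {who}")

async def engine_tick():
    await asyncio.sleep(0.1)
    bus.fire("player_seen", "player_1")           # event from the engine
    await asyncio.sleep(0.1)

async def main():
    ai = asyncio.create_task(guard_flowchart())
    await engine_tick()
    ai.cancel()
    await asyncio.gather(ai, return_exceptions=True)

asyncio.run(main())
```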

Citations: 1
Visualization and visual analysis of vessel trajectory data: A survey
IF 3 · Region 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2021-12-01 · DOI: 10.1016/j.visinf.2021.10.002
Haiyan Liu, Xiaohui Chen, Yidi Wang, Bing Zhang, Yunpeng Chen, Ying Zhao, Fangfang Zhou

Maritime transport plays a critical role in international trade and commerce. The massive number of vessels sailing around the world continuously generates vessel trajectory data that contain rich spatio-temporal patterns of vessel navigation. Analyzing and understanding these patterns is valuable for maritime traffic surveillance and management. As essential techniques for complex data analysis and understanding, visualization and visual analysis have been widely used in vessel trajectory data analysis. This paper presents a literature review of the visualization and visual analysis of vessel trajectory data. First, we introduce commonly used vessel trajectory data sets and summarize the main operations in vessel trajectory data preprocessing. Then, we provide a taxonomy of visualization and visual analysis of vessel trajectory data based on existing approaches and introduce representative works in detail. Finally, we discuss the remaining challenges and directions for future research.
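
As an example of the preprocessing operations such surveys catalogue, the sketch below segments an AIS point stream at large time gaps and drops physically implausible speed outliers; the thresholds and point format are assumptions, not taken from any particular surveyed paper.

```python
# Two common AIS preprocessing steps: gap segmentation and speed filtering.
import math

def split_and_clean(points, max_gap_s=3600, max_speed_kn=50.0):
    """points: list of (lat, lon, unix_time), sorted by time."""
    def speed_kn(p, q):
        # crude equirectangular distance in nautical miles, over hours
        dlat = (q[0] - p[0]) * 60.0
        dlon = (q[1] - p[1]) * 60.0 * math.cos(math.radians(p[0]))
        hours = max((q[2] - p[2]) / 3600.0, 1e-9)
        return math.hypot(dlat, dlon) / hours

    segments, current = [], [points[0]]
    for cur in points[1:]:
        prev = current[-1]
        if cur[2] - prev[2] > max_gap_s:          # long gap: new trajectory
            segments.append(current)
            current = [cur]
        elif speed_kn(prev, cur) > max_speed_kn:  # GPS jump: drop outlier
            continue
        else:
            current.append(cur)
    segments.append(current)
    return segments

pts = [(30.00, 122.0, 0), (30.01, 122.0, 600), (30.50, 122.0, 660),
       (30.02, 122.0, 1200), (30.02, 122.1, 9000)]
print([len(s) for s in split_and_clean(pts)])     # -> [3, 1]
```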

Citations: 19
Evaluating user cognition of network diagrams
IF 3 · Region 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2021-12-01 · DOI: 10.1016/j.visinf.2021.12.004
Xiaojiao Chen, Xiaoteng Tang, Zijing Luo, Jiayi Zhang

Edge crossings and node overlaps significantly affect users' recognition and comprehension of network diagrams. In this study, we propose a visual evaluation method for users' cognition of network diagrams. First, the method carries out a set of cognitive experiments to collect users' cognitive performance on the dependent variables, including accuracy and response time. Users' pupil diameters are measured with an eye tracker to reflect their cognitive load. Second, significance tests identify the visual features to use as independent variables, and evaluation regression models are then established. The experimental results show that the number of edges, edge length, node visual interference, and edge occlusion contribute to the response-time models, while edge occlusion and the number of node connections contribute to the accuracy model. Finally, these evaluation models demonstrate good predictability when assessing users' cognition of network diagrams and provide practical recommendations for their use.
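
A minimal sketch of the kind of regression model the abstract describes, fit by ordinary least squares; the features follow the abstract, but the data and coefficients below are synthetic and do not reproduce the paper's fitted models.

```python
# Illustrative OLS fit: response time from visual features of a diagram.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# features: edge count, mean edge length, node interference, edge occlusion
X = rng.random((n, 4)) * np.array([300.0, 120.0, 1.0, 1.0])
# synthetic "ground-truth" response time in seconds, plus noise
t = (2.0 + 0.01 * X[:, 0] + 0.02 * X[:, 1]
     + 3.0 * X[:, 2] + 5.0 * X[:, 3] + rng.normal(0.0, 0.5, n))

A = np.column_stack([np.ones(n), X])              # intercept column
coef, *_ = np.linalg.lstsq(A, t, rcond=None)      # least-squares fit
pred = A @ coef
r2 = 1.0 - ((t - pred) ** 2).sum() / ((t - t.mean()) ** 2).sum()
print("coefficients:", np.round(coef, 3), " R^2:", round(r2, 3))
```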

Citations: 1
G6: A web-based library for graph visualization
IF 3 · Region 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2021-12-01 · DOI: 10.1016/j.visinf.2021.12.003
Yanyan Wang, Zhanning Bai, Zhifeng Lin, Xiaoqing Dong, Yingchaojie Feng, Jiacheng Pan, Wei Chen

Authoring graph visualizations poses great challenges to developers because it requires both domain knowledge and development skills. Although existing libraries and tools reduce the difficulty of generating graph visualizations, many challenges remain. Working closely with developers, we formulate several design goals, then design and implement G6, a web-based library for graph visualization. It combines template-based configuration for high usability with flexible customization for high expressiveness. To enhance development efficiency, G6 provides a range of optimizations, including state management and interaction modes. We demonstrate its capabilities through an extensive gallery, a quantitative performance evaluation, and an expert interview. G6 was first released in 2017 and has been iterated through 317 versions. It has served as a web-based library for thousands of applications and has received 8312 stars on GitHub.

Citations: 9
Image Captioning with multi-level similarity-guided semantic matching
IF 3 · Region 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2021-12-01 · DOI: 10.1016/j.visinf.2021.11.003
Jiesi Li, Ning Xu, Weizhi Nie, Shenyuan Zhang

Image captioning is a cross-modal task that automatically generates coherent natural sentences to describe image contents. Due to the large gap between the vision and language modalities, most existing methods suffer from inaccurate semantic matching between images and generated captions. To solve this problem, this paper proposes a novel multi-level similarity-guided semantic matching method for image captioning, which fuses local and global semantic similarities to learn the latent semantic correlation between images and generated captions. Specifically, we extract semantic units containing fine-grained semantic information from the images and the generated captions, respectively. Based on a comparison of the semantic units, we design a local semantic similarity evaluation mechanism. Meanwhile, we employ the CIDEr score to characterize global semantic similarity. The local and global similarities are finally fused using reinforcement learning to guide model optimization toward better semantic matching. Quantitative and qualitative experiments on the large-scale MSCOCO dataset illustrate the superiority of the proposed method, which achieves fine-grained semantic matching between images and generated captions.
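
The following sketch illustrates the reward-fusion idea in a self-critical (REINFORCE-with-baseline) setup: a sampled caption's reward mixes a local semantic-unit similarity with a global CIDEr-style score. `local_similarity`, the stub metric, and the mixing weight `alpha` are placeholders, not the paper's actual components.

```python
# Hypothetical local/global reward fusion for policy-gradient captioning.
def local_similarity(caption_units, image_units):
    """Toy local score: Jaccard overlap of fine-grained semantic units."""
    c, i = set(caption_units), set(image_units)
    return len(c & i) / max(len(c | i), 1)

def fused_reward(units, image_units, cider, alpha=0.5):
    return (alpha * local_similarity(units, image_units)
            + (1.0 - alpha) * cider(units))

def advantage(sampled, greedy, image_units, cider, alpha=0.5):
    # Sampled caption's reward minus the greedy baseline's reward;
    # this weights -log p(sampled) in the policy-gradient loss.
    return (fused_reward(sampled, image_units, cider, alpha)
            - fused_reward(greedy, image_units, cider, alpha))

cider = lambda units: 0.8 if "dog" in units else 0.3   # stub global metric
adv = advantage(["a", "dog", "runs"], ["a", "cat", "sits"],
                ["dog", "grass"], cider)
print("advantage:", round(adv, 3))
```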

Citations: 6