
Latest publications in Visual Informatics

IVMS: An immersive virtual meteorological sandbox based on WYSIWYG
IF 3.0 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Information Systems) | Pub Date: 2023-12-01 | DOI: 10.1016/j.visinf.2023.08.001
Hao Hu, Song Wang, Yonghui Chen

The maturation of Immersive Analytics (IA) has opened up new ways to represent meteorological data visually. We propose an immersive meteorological virtual sandbox that addresses the limitations of 2D analysis in expressing and perceiving such data, and that lets users interact directly with the data through non-contact aerial gestures (NCAG). Drawing on the “What you see is what you get” (WYSIWYG) concept from scientific visualization, the approach aims to immerse users in the analysis process, and we hope it can also inspire immersive visualization techniques for other types of geographic data. Finally, we conducted a user questionnaire to evaluate the system. The results show that it effectively reduces cognitive burden, alleviates mental workload, and improves users’ retention of analysis findings.

Citations: 0
A survey of immersive visualization: Focus on perception and interaction
IF 3.0 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Information Systems) | Pub Date: 2023-12-01 | DOI: 10.1016/j.visinf.2023.10.003
Yue Zhang, Zhenyuan Wang, Jinhui Zhang, Guihua Shan, Dong Tian

Immersive visualization uses virtual reality, mixed reality devices, and other interactive devices to create a novel visual environment that integrates multimodal perception and interaction. The technology has matured in recent years and has found broad application in many fields. Building on the latest research advances in visualization, this paper summarizes the state of the art in immersive visualization from the perspectives of multimodal perception and interaction in immersive environments, and additionally discusses the current hardware foundations of immersive setups. By examining the design patterns and research approaches of previous immersive methods, the paper identifies the design factors for multimodal perception and interaction in current immersive environments. Finally, it discusses the challenges and development trends of immersive multimodal perception and interaction techniques and explores potential directions for growth in immersive visualization design.

Citations: 0
TopicBubbler: An interactive visual analytics system for cross-level fine-grained exploration of social media data
IF 3.0 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Information Systems) | Pub Date: 2023-12-01 | DOI: 10.1016/j.visinf.2023.08.002
Jielin Feng, Kehao Wu, Siming Chen

How to extract fine-grained yet meaningful information from the massive amount of social media data is critical but challenging. To address this challenge, we propose TopicBubbler, a visual analytics system that supports cross-level, fine-grained exploration of social media data, built around a new workflow. Following this workflow, we construct a fine-grained exploration view based on bubble-shaped word clouds. Each bubble contains two rings that display information at different levels and recommends six keywords computed by different algorithms. The view supports users in collecting information at different levels and in performing fine-grained selection and exploration across levels based on the keyword recommendations. To let users explore temporal information and hierarchical structure, we also construct a Temporal View and a Hierarchical View, which show cross-level dynamic trends and an overview of the hierarchy. In addition, we use a storyline metaphor that enables users to consolidate fragmented information extracted across levels and topics and ultimately present it as a complete story. Case studies on real-world data confirm the capability of TopicBubbler from different perspectives, including event mining across levels and topics and fine-grained mining of specific topics to capture events hidden beneath the surface.
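
The abstract notes that each bubble recommends six keywords computed by different algorithms, without naming them. As a rough illustration of what one such recommender could look like, the sketch below ranks candidate keywords for a topic with TF-IDF against a background corpus; the function name, the sample data, and the choice of TF-IDF are assumptions, not the authors' algorithms.

```python
# Minimal sketch of one way to recommend keywords for a topic bubble.
# TF-IDF ranking is an assumption for illustration, not the paper's method.
from sklearn.feature_extraction.text import TfidfVectorizer


def recommend_keywords(topic_posts, background_posts, k=6):
    """Return k keywords that are distinctive for `topic_posts`
    relative to the full corpus (`background_posts`)."""
    corpus = list(background_posts) + [" ".join(topic_posts)]   # last doc = merged topic
    vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
    tfidf = vectorizer.fit_transform(corpus)
    topic_vector = tfidf[len(corpus) - 1].toarray().ravel()
    terms = vectorizer.get_feature_names_out()
    top = topic_vector.argsort()[::-1][:k]                      # highest TF-IDF first
    return [terms[i] for i in top]


if __name__ == "__main__":
    topic = ["flood warning river levels rising", "river flood evacuation alert"]
    background = ["concert tickets on sale", "election results announced",
                  "flood warning issued downtown", "new cafe opens"]
    print(recommend_keywords(topic, background, k=6))
```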

Citations: 0
Perspectives on point cloud-based 3D scene modeling and XR presentation within the cloud-edge-client architecture
IF 3.0 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Information Systems) | Pub Date: 2023-09-01 | DOI: 10.1016/j.visinf.2023.06.007
Hongjia Wu, Hongxin Zhang, Jiang Cheng, Jianwei Guo, Wei Chen

With the support of edge computing, the synergy and collaboration among the central cloud, edge cloud, and terminal devices form an integrated computing ecosystem known as the cloud-edge-client architecture. This integration unlocks the value of data and computational power, presenting significant opportunities for large-scale 3D scene modeling and XR presentation. In this paper, we explore the perspectives on, and highlight new challenges in, point cloud-based 3D scene modeling and XR presentation within the cloud-edge-client integrated architecture. We also propose a novel cloud-edge-client integrated technology framework and a demonstration application in municipal governance to address these challenges.

Citations: 2
Multi-scale visual analysis of cycle characteristics in spatially-embedded graphs
IF 3.0 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Information Systems) | Pub Date: 2023-09-01 | DOI: 10.1016/j.visinf.2023.06.005
Farhan Rasheed, Talha Bin Masood, Tejas G. Murthy, Vijay Natarajan, Ingrid Hotz

We present a visual analysis environment based on a multi-scale partitioning of a 2D domain into regions bounded by cycles in weighted planar embedded graphs. The work is inspired by an application in granular materials research, where the question of scale plays a fundamental role in the analysis of material properties. We propose an efficient algorithm to extract the hierarchical cycle structure using persistent homology. The core of the algorithm is a filtration on a dual graph exploiting Alexander duality. The resulting partitioning is the basis for deriving statistical properties that can be explored in a visual environment. We demonstrate the proposed pipeline on several synthetic datasets and one real-world dataset.
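
The abstract's key step is a filtration on the dual graph that exploits Alexander duality, under which cycles of the primal planar graph correspond to connected components of the dual. The sketch below shows the generic union-find machinery for such a 0-dimensional filtration: dual edges (one per primal edge, connecting the two faces it separates) are processed in order of weight, and each merge of two dual regions is recorded. The face indexing, filtration direction, and pairing rule are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: merge events of dual regions under an edge-weight filtration,
# computed with union-find. By Alexander duality on a planar embedded graph,
# these merge events mirror the birth and merging of primal cycles.

class UnionFind:
    """Disjoint sets over the dual regions (faces) of the planar graph."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return None
        self.parent[rb] = ra          # survivor choice stands in for the elder rule
        return ra, rb


def dual_merge_events(num_regions, dual_edges):
    """dual_edges: (weight, face_a, face_b) for each primal edge, where face_a
    and face_b are the two faces it separates (outer face included).
    Returns (weight, surviving_region, absorbed_region) merge events, i.e. the
    deaths of 0-dimensional classes in the dual filtration."""
    uf = UnionFind(num_regions)
    events = []
    for w, a, b in sorted(dual_edges):            # filtration by increasing edge weight
        merged = uf.union(a, b)
        if merged is not None:
            events.append((w, merged[0], merged[1]))
    return events


if __name__ == "__main__":
    # A toy planar graph with 4 faces (face 0 assumed to be the outer face).
    dual_edges = [(0.2, 0, 1), (0.5, 1, 2), (0.9, 2, 3), (0.4, 3, 0), (0.7, 1, 3)]
    for w, survivor, absorbed in dual_merge_events(4, dual_edges):
        print(f"weight {w:.1f}: dual region {absorbed} merges into {survivor}")
```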

Citations: 0
Visualizing ordered bivariate data on node-link diagrams
IF 3.0 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Information Systems) | Pub Date: 2023-09-01 | DOI: 10.1016/j.visinf.2023.06.003
Osman Akbulut, Lucy McLaughlin, Tong Xin, Matthew Forshaw, Nicolas S. Holliman

Node-link visual representation is a widely used tool that allows decision-makers to see details about a network through the appropriate choice of visual metaphor. However, existing visualization methods are not always effective and efficient in representing bivariate graph-based data. This study proposes a novel node-link visual model, the visual entropy (Vizent) graph, to effectively represent both primary and secondary values, such as uncertainty, on the edges simultaneously. We performed two user studies to demonstrate the efficiency and effectiveness of our approach in the context of static node-link diagrams. In the first experiment, we evaluated the performance of the Vizent design to determine whether it performed as well as or better than existing alternatives in terms of response time and accuracy. Three static visual encodings that use two visual cues were selected from the literature for comparison: Width-Lightness, Saturation-Transparency, and Numerical values. We compared the Vizent design to the selected visual encodings on graphs ranging in complexity from 5 to 25 edges for three different tasks. Participants achieved higher response accuracy with Vizent and Numerical values, whereas neither Width-Lightness nor Saturation-Transparency performed consistently across all tasks. Our results suggest that increasing graph size has no impact on Vizent in terms of response time and accuracy. The performance of the Vizent graph was then compared to the Numerical values visualization. The Wilcoxon signed-rank test revealed that mean response time in seconds was significantly lower when the Vizent graphs were presented, while no significant difference in accuracy was found. The experimental results are encouraging and, we believe, justify using the Vizent graph as a good alternative to traditional methods for representing bivariate data in the context of node-link diagrams.
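
To make the bivariate-edge idea concrete, here is a minimal matplotlib sketch that encodes a primary value as edge width and a secondary value (e.g., uncertainty) as the size of a midpoint marker. The node layout and data are hypothetical, and this is only a generic width-plus-glyph encoding, not the authors' Vizent glyph design.

```python
# Minimal sketch of a bivariate edge encoding on a node-link diagram:
# primary value -> line width, secondary value (e.g., uncertainty) ->
# size of a midpoint marker. Illustrates the general idea only.
import matplotlib.pyplot as plt

nodes = {"A": (0, 0), "B": (2, 1), "C": (1, 2)}
# (u, v, primary, secondary) -- hypothetical data
edges = [("A", "B", 0.9, 0.2), ("B", "C", 0.4, 0.8), ("A", "C", 0.6, 0.5)]

fig, ax = plt.subplots(figsize=(4, 4))
for u, v, primary, secondary in edges:
    (x0, y0), (x1, y1) = nodes[u], nodes[v]
    ax.plot([x0, x1], [y0, y1], color="steelblue",
            linewidth=1 + 6 * primary, zorder=1)                 # primary -> width
    ax.scatter([(x0 + x1) / 2], [(y0 + y1) / 2], s=50 + 400 * secondary,
               facecolor="white", edgecolor="black", zorder=2)   # secondary -> glyph size
for name, (x, y) in nodes.items():
    ax.scatter([x], [y], s=300, color="darkorange", zorder=3)
    ax.annotate(name, (x, y), ha="center", va="center", zorder=4)
ax.set_axis_off()
plt.show()
```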

Citations: 0
PubExplorer: An interactive analytical system for visualizing publication data
IF 3.0 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Information Systems) | Pub Date: 2023-09-01 | DOI: 10.1016/j.visinf.2023.07.001
Minzhu Yu, Yang Wang, Xiaomin Yu, Guihua Shan, Zhong Jin

With the intersection and convergence of multiple disciplines and technologies, more and more researchers are actively exploring interdisciplinary cooperation outside their main research fields. Facing a new research field, researchers often hope to quickly learn what is being studied in the field, which research points are receiving the most attention, and which researchers are working on them, and then to consider the possibility of collaborating with core researchers on those points. In addition, students preparing for further academic study usually investigate prospective mentors and their research platforms, including academic connections, employment opportunities, and so on. To satisfy these requirements, we (1) design a research point state map based on a science map to help researchers and students understand the development state of a new research field; (2) design a bar-link author-affiliation information graph to help researchers and students clarify the academic networks of scholars and find suitable collaborators or mentors; and (3) design a citation pattern histogram to quickly discover research achievements with high research value, such as Sleeping Beauty papers, recently hot papers, and classic papers. Finally, an interactive analytical system named PubExplorer was implemented on IEEE VIS publication data, and its effectiveness is verified through case studies.
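
One established way to quantify the "Sleeping Beauty" pattern mentioned above is the beauty coefficient of Ke et al. (2015), which measures how far a paper's yearly citation curve sags below the straight line from its publication-year citations to its citation peak. The sketch below computes that coefficient from yearly counts; using this particular metric is an assumption, since the abstract does not state which criterion PubExplorer applies.

```python
# Minimal sketch: the "beauty coefficient" of Ke et al. (2015) as one way to
# score delayed recognition from a paper's yearly citation counts. Using this
# metric is an assumption; the abstract does not name PubExplorer's criterion.

def beauty_coefficient(citations_per_year):
    """citations_per_year[t] = citations received t years after publication."""
    c = citations_per_year
    t_m = max(range(len(c)), key=lambda t: c[t])    # year of the citation peak
    if t_m == 0:
        return 0.0                                  # peaks immediately: no "sleep"
    c0, cm = c[0], c[t_m]
    total = 0.0
    for t in range(t_m + 1):
        reference_line = (cm - c0) / t_m * t + c0   # line from (0, c0) to (t_m, cm)
        total += (reference_line - c[t]) / max(1, c[t])
    return total


if __name__ == "__main__":
    sleeper = [1, 0, 0, 1, 0, 2, 1, 3, 15, 40]      # hypothetical counts
    steady = [5, 7, 8, 9, 10, 11, 12, 12, 13, 14]
    print("sleeper:", round(beauty_coefficient(sleeper), 1))   # large score
    print("steady :", round(beauty_coefficient(steady), 1))    # near zero
```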

Citations: 0
MEinVR: Multimodal interaction techniques in immersive exploration
IF 3.0 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Information Systems) | Pub Date: 2023-09-01 | DOI: 10.1016/j.visinf.2023.06.001
Ziyue Yuan, Shuqi He, Yu Liu, Lingyun Yu

Immersive environments have become increasingly popular for visualizing and exploring large-scale, complex scientific data because of their key features: immersion, engagement, and awareness. Virtual reality offers numerous new interaction possibilities, including tactile and tangible interactions, gestures, and voice commands. However, it is crucial to determine the most effective combination of these techniques for a more natural interaction experience. In this paper, we present MEinVR, a novel multimodal interaction technique for exploring 3D molecular data in virtual reality. MEinVR combines VR controller and voice input to provide a more intuitive way for users to manipulate data in immersive environments. By using the VR controller to select locations and regions of interest and voice commands to perform tasks, users can efficiently perform complex data exploration tasks. Our findings provide suggestions for the design of multimodal interaction techniques in 3D data exploration in virtual reality.
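
As a concrete illustration of the controller-plus-voice division of labor described above, the sketch below pairs the most recent controller selection with an incoming voice command if the two arrive within a short time window. The dispatcher class, window length, and command strings are hypothetical; this is not the MEinVR implementation.

```python
# Hypothetical fusion of the two input streams: the latest voice command is
# applied to the latest controller selection if both arrive close in time.
import time
from dataclasses import dataclass


@dataclass
class MultimodalDispatcher:
    window_s: float = 2.0          # assumed pairing window in seconds
    _selection: object = None      # most recent controller selection
    _selection_time: float = 0.0

    def on_controller_select(self, region):
        """Called when the VR controller selects a location or region of interest."""
        self._selection = region
        self._selection_time = time.monotonic()

    def on_voice_command(self, command):
        """Called with a recognized voice command such as 'isolate'."""
        if self._selection is None:
            return f"ignored '{command}': nothing selected"
        if time.monotonic() - self._selection_time > self.window_s:
            return f"ignored '{command}': selection is stale"
        return f"apply '{command}' to {self._selection}"


if __name__ == "__main__":
    dispatcher = MultimodalDispatcher()
    dispatcher.on_controller_select("residues 10-25")
    print(dispatcher.on_voice_command("isolate"))   # -> apply 'isolate' to residues 10-25
```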

Citations: 0
CARVING-DETC: A network scaling and NMS ensemble for Balinese carving motif detection method
IF 3.0 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Information Systems) | Pub Date: 2023-09-01 | DOI: 10.1016/j.visinf.2023.05.004
I Wayan Agus Surya Darma, Nanik Suciati, Daniel Siahaan

Balinese carvings are cultural objects that adorn sacred buildings. The carvings consist of several motifs, each representing values adopted by the Balinese people. Detecting Balinese carving motifs is challenging due to the unavailability of a Balinese carving dataset for detection tasks, high variance, and tiny carving motifs. This research aims to improve detection performance on this challenging task through a modification of YOLOv5, in support of a digital carving conservation system. We propose CARVING-DETC, a deep learning-based Balinese carving detection method consisting of three steps. First, the data generation step performs data augmentation and annotation on Balinese carving images. Second, we apply a network scaling strategy to the YOLOv5 model and perform non-maximum suppression (NMS) on the model ensemble to generate optimal predictions: the ensemble uses NMS to keep the detection with the highest confidence score and suppress overlapping predictions with lower confidence scores. Third, we evaluate the performance of the scaled YOLOv5 versions and the NMS ensemble models. The findings are beneficial for conserving this cultural heritage and as a reference for other researchers. In addition, this study contributes a novel Balinese carving dataset built through data collection, augmentation, and annotation; to our knowledge, it is the first Balinese carving dataset for the object detection task. Based on experimental results, CARVING-DETC achieves a detection performance of 98%, outperforming the baseline model.
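
The ensemble step described above pools detections from the differently scaled models and reduces them with non-maximum suppression. The NumPy sketch below shows that pooling-plus-NMS step in isolation; the (x1, y1, x2, y2, score, class) box format and the IoU threshold are assumptions, and this is not the full CARVING-DETC pipeline.

```python
# Minimal sketch of an NMS ensemble: detections from several scaled models are
# pooled and reduced with class-wise non-maximum suppression, keeping the
# highest-confidence box and suppressing overlapping lower-confidence ones.
import numpy as np


def iou(box, boxes):
    """IoU between one box and an array of boxes, all as (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)


def ensemble_nms(detections_per_model, iou_thr=0.5):
    """detections_per_model: list of (N_i, 6) arrays with rows
    (x1, y1, x2, y2, score, class_id). Returns the kept rows."""
    pooled = np.vstack(detections_per_model)
    keep = []
    for cls in np.unique(pooled[:, 5]):
        dets = pooled[pooled[:, 5] == cls]
        dets = dets[dets[:, 4].argsort()[::-1]]       # highest confidence first
        while len(dets):
            best, dets = dets[0], dets[1:]
            keep.append(best)
            if len(dets):
                dets = dets[iou(best[:4], dets[:, :4]) < iou_thr]  # suppress overlaps
    return np.array(keep)


if __name__ == "__main__":
    model_a = np.array([[10, 10, 50, 50, 0.90, 0], [60, 60, 90, 90, 0.40, 1]])
    model_b = np.array([[12, 11, 52, 49, 0.85, 0]])   # overlaps model_a's first box
    print(ensemble_nms([model_a, model_b]))
```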

Citations: 0
Multiview SVBRDF capture from unified shape and illumination
IF 3.0 | CAS Tier 3 (Computer Science) | JCR Q2 (Computer Science, Information Systems) | Pub Date: 2023-09-01 | DOI: 10.1016/j.visinf.2023.06.006
Liang Yuan, Issei Fujishiro

This paper proposes a stable method for reconstructing spatially varying appearance (SVBRDFs) from multiview images captured under casual lighting conditions. Unlike flat-surface capture methods, ours can be applied to surfaces with complex silhouettes. The proposed method takes multiview images as inputs and outputs a unified SVBRDF estimate. We generated a large-scale dataset containing multiview images, SVBRDFs, and the lit appearance of a large set of synthetic objects to train a two-stream hierarchical U-Net for SVBRDF estimation, which is integrated into a differentiable rendering network for surface appearance reconstruction. Compared with state-of-the-art approaches, our method produces SVBRDFs with lower bias for more casually captured images.
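
To give a sense of the rendering side of such a pipeline, the toy sketch below shades a tiny image from SVBRDF-style maps (albedo, normals, roughness) with a Lambertian term plus a Blinn-Phong specular term under a single directional light, and compares the result to a hypothetical photograph with an L2 loss. This is a deliberately simplified stand-in for intuition only, not the paper's differentiable renderer or BRDF model.

```python
# Toy shading pass: per-pixel Lambertian diffuse + Blinn-Phong specular from
# SVBRDF-style maps under one directional light. A differentiable-rendering
# loss would compare such a rendering against the captured photo.
import numpy as np

H = W = 4                                            # tiny image for the example
albedo = np.full((H, W, 3), 0.6)                     # assumed flat gray albedo
normal = np.zeros((H, W, 3))
normal[..., 2] = 1.0                                 # flat surface facing +z
roughness = np.full((H, W), 0.3)

light_dir = np.array([0.0, 0.0, 1.0])                # unit directional light
view_dir = np.array([0.0, 0.0, 1.0])                 # unit view direction


def shade(albedo, normal, roughness, light_dir, view_dir):
    n_dot_l = np.clip((normal * light_dir).sum(-1), 0.0, None)
    half = light_dir + view_dir
    half = half / np.linalg.norm(half)
    n_dot_h = np.clip((normal * half).sum(-1), 0.0, None)
    shininess = 2.0 / np.clip(roughness, 1e-3, 1.0) ** 2   # rough -> low exponent
    diffuse = albedo * n_dot_l[..., None]
    specular = (n_dot_h ** shininess * n_dot_l)[..., None]
    return np.clip(diffuse + 0.2 * specular, 0.0, 1.0)


rendered = shade(albedo, normal, roughness, light_dir, view_dir)
photo = np.full((H, W, 3), 0.7)                      # hypothetical captured pixels
l2_loss = np.mean((rendered - photo) ** 2)           # quantity a trainer would minimize
print(rendered[0, 0], float(l2_loss))
```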

Citations: 0