
Visual Informatics: Latest Publications

DPKnob: A visual analysis approach to risk-aware formulation of differential privacy schemes for data query scenarios
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-01 | DOI: 10.1016/j.visinf.2024.09.002
Differential privacy is an essential approach for privacy preservation in data queries. However, users face a significant challenge in selecting an appropriate privacy scheme, as they struggle to balance the utility of query results with the preservation of diverse individual privacy. Customizing a privacy scheme becomes even more complex in dealing with queries that involve multiple data attributes. When adversaries attempt to breach privacy firewalls by conducting multiple regular data queries with various attribute values, data owners must arduously discern unpredictable disclosure risks and construct suitable privacy schemes. In this paper, we propose a visual analysis approach for formulating differential privacy schemes. Our approach supports the identification and simulation of potential privacy attacks in querying statistical results of multi-dimensional databases. We also developed a prototype system, called DPKnob, which integrates multiple coordinated views. DPKnob not only allows users to interactively assess and explore privacy exposure risks by browsing high-risk attacks, but also facilitates an iterative process for formulating and optimizing privacy schemes based on differential privacy. This iterative process allows users to compare different schemes, refine their expectations of privacy and utility, and ultimately establish a well-balanced privacy scheme. The effectiveness of this approach is verified by a user study and two case studies with real-world datasets.
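The abstract assumes familiarity with how differential privacy perturbs query answers. As background only (not code from the paper), here is a minimal sketch of the standard Laplace mechanism for a count query, which shows the privacy/utility trade-off that such schemes tune; the cohort query and the numbers are illustrative assumptions.

```python
import numpy as np

def laplace_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count under epsilon-differential privacy via the Laplace mechanism.

    A single individual changes a count query by at most 1, so sensitivity = 1.
    Smaller epsilon means stronger privacy but noisier, less useful answers.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Illustrative query: how many records in a table satisfy some attribute filter.
true_answer = 137
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: noisy count = {laplace_count(true_answer, eps):.1f}")
```

Iterating over such epsilon choices while inspecting simulated attacks is, at a high level, the kind of risk and utility comparison the abstract describes.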
Citations: 0
MILG: Realistic lip-sync video generation with audio-modulated image inpainting
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-01 | DOI: 10.1016/j.visinf.2024.08.002
Existing lip synchronization (lip-sync) methods generate accurately synchronized mouths and faces in a generated video. However, they still confront the problem of artifacts in regions of non-interest (RONI), e.g., background and other parts of a face, which decreases the overall visual quality. To solve these problems, we innovatively introduce diverse image inpainting to lip-sync generation. We propose Modulated Inpainting Lip-sync GAN (MILG), an audio-constraint inpainting network to predict synchronous mouths. MILG utilizes prior knowledge of RONI and audio sequences to predict lip shape instead of image generation, which keeps the RONI consistent. Specifically, we integrate modulated spatially probabilistic diversity normalization (MSPD Norm) in our inpainting network, which helps the network generate fine-grained diverse mouth movements guided by the continuous audio features. Furthermore, to lower the training overhead, we modify the contrastive loss in lip-sync to support small-batch-size and few-sample training. Extensive experiments demonstrate that our approach outperforms the existing state of the art in image quality and authenticity while preserving lip-sync.
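MSPD Norm itself is not specified in this listing, so the sketch below is only a generic audio-conditioned modulation layer in the AdaIN/SPADE spirit, written in PyTorch; the class name, tensor shapes, and dimensions are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class AudioModulatedNorm(nn.Module):
    """Normalize image features, then scale/shift them with parameters
    predicted from an audio embedding (AdaIN/SPADE-style conditioning)."""

    def __init__(self, channels: int, audio_dim: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.to_gamma = nn.Linear(audio_dim, channels)
        self.to_beta = nn.Linear(audio_dim, channels)

    def forward(self, feat: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) image features; audio: (B, audio_dim) per-frame embedding
        gamma = self.to_gamma(audio).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_beta(audio).unsqueeze(-1).unsqueeze(-1)
        return self.norm(feat) * (1 + gamma) + beta

x = torch.randn(2, 64, 32, 32)
a = torch.randn(2, 128)
print(AudioModulatedNorm(64, 128)(x, a).shape)  # torch.Size([2, 64, 32, 32])
```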
Citations: 0
AvatarWild: Fully controllable head avatars in the wild
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-01 | DOI: 10.1016/j.visinf.2024.09.001
Recent advances in neural radiance fields (NeRF) have led to significant progress in realistic head reconstruction and manipulation. Despite these advances, capturing intricate facial details remains a persistent challenge. Moreover, casually captured input, involving both head poses and camera movements, introduces additional difficulties for existing methods of head avatar reconstruction. To address the challenge posed by video data captured with camera motion, we propose a novel method, AvatarWild, for reconstructing head avatars from monocular videos taken by consumer devices. Notably, our approach decouples the camera pose and head pose, allowing reconstructed avatars to be visualized with different poses and expressions from novel viewpoints. To enhance the visual quality of the reconstructed facial avatar, we introduce a view-dependent detail enhancement module designed to augment local facial details without compromising viewpoint consistency. Our method demonstrates superior performance compared to existing approaches, as evidenced by reconstruction and animation results on both multi-view and single-view datasets. Remarkably, our approach relies exclusively on video data captured by portable devices, such as smartphones. This not only underscores the practicality of our method but also extends its applicability to real-world scenarios where accessibility and ease of data capture are crucial.
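The key idea of decoupling camera pose from head pose can be illustrated with plain homogeneous transforms; the matrices and the point below are made-up values, and this is a conceptual sketch rather than AvatarWild's actual pipeline.

```python
import numpy as np

def compose_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical per-frame estimates: head pose (head->world) and camera pose (world->camera).
head_to_world = compose_pose(np.eye(3), np.array([0.0, 0.0, 0.5]))
world_to_cam = compose_pose(np.eye(3), np.array([0.0, 0.0, -2.0]))

# Because the two transforms are kept separate, a point defined in the head's
# canonical space can be re-rendered under a new camera (or a new head pose)
# without touching the learned head representation.
p_canonical = np.array([0.01, 0.02, 0.1, 1.0])
p_camera = world_to_cam @ head_to_world @ p_canonical
print(p_camera[:3])
```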
Citations: 0
Visual exploration of multi-dimensional data via rule-based sample embedding
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-01 | DOI: 10.1016/j.visinf.2024.09.005
We propose an approach to learning sample embeddings for analyzing multi-dimensional datasets. The basic idea is to extract rules from the given dataset and learn the embedding for each sample based on the rules it satisfies. The approach can filter out pattern-irrelevant attributes, leading to significant visual structures of samples satisfying the same rules in the projection. In addition, analysts can understand a visual structure based on the rules that the involved samples satisfy, which improves the projection's pattern interpretability. Our research involves two methods for achieving and applying the approach. First, we give a method to learn a rule-based embedding for each sample. Second, we integrate the method into a system to achieve an analytical workflow. Case studies on a real-world dataset and quantitative experiment results show the usability and effectiveness of our approach.
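The paper's rule-extraction method is not given here, so the following sketch only illustrates the general recipe: embed each sample as a binary vector of the rules it satisfies and project that vector, so attributes untouched by any rule cannot influence the layout. The toy data, hand-written rules, and the use of PCA are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy multi-dimensional dataset: rows are samples, columns are attributes.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

# Hypothetical extracted rules: each rule is a predicate over attribute values.
rules = [
    lambda s: s[0] > 0.5,
    lambda s: s[1] < -0.2 and s[2] > 0.0,
    lambda s: abs(s[3]) < 1.0,
    lambda s: s[4] > s[0],
]

# Each sample is embedded by the set of rules it satisfies (a 0/1 vector).
R = np.array([[float(rule(s)) for rule in rules] for s in X])
proj = PCA(n_components=2).fit_transform(R)
print(proj.shape)  # (200, 2) coordinates for a scatterplot projection
```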
Citations: 0
RelicCARD: Enhancing cultural relics exploration through semantics-based augmented reality tangible interaction design
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-01 | DOI: 10.1016/j.visinf.2024.06.003
Cultural relics visualization brings digital archives of relics to broader audiences in many applications, such as education, historical research, and virtual museums. However, previous research mainly focused on modeling and rendering the relics. While enhancing accessibility, these techniques still provide limited ability to improve user engagement. In this paper, we introduce RelicCARD, a semantics-based augmented reality (AR) tangible interaction design for exploring cultural relics. Our design uses an easily available tangible interface to encourage the users to interact with a large collection of relics. The tangible interface allows users to explore, select, and arrange relics to form customized scenes. To guide the design of the interface, we formalize a design space by connecting the semantics in relics, the tangible interaction patterns, and the exploration tasks. We realize the design space as a tangible interactive prototype and examine its feasibility and effectiveness using multiple case studies and an expert evaluation. Finally, we discuss the findings in the evaluation and future directions to improve the design and implementation of the interactive design space.
Citations: 0
RenderKernel: High-level programming for real-time rendering systems
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-01 | DOI: 10.1016/j.visinf.2024.09.004
Real-time rendering applications leverage heterogeneous computing to optimize performance. However, software development across multiple devices presents challenges, including data layout inconsistencies, synchronization issues, resource management complexities, and architectural disparities. Additionally, the creation of such systems requires verbose and unsafe programming models. Recent developments in domain-specific and unified shading languages aim to mitigate these issues. Yet, current programming models primarily address data layout consistency, neglecting other persistent challenges. In this paper, we introduce RenderKernel, a programming model designed to simplify the development of real-time rendering systems. Recognizing the need for a high-level approach, RenderKernel addresses the specific challenges of real-time rendering, enabling development on heterogeneous systems as if they were homogeneous. This model allows for early detection and prevention of errors due to system heterogeneity at compile-time. Furthermore, RenderKernel enables the use of common programming patterns from homogeneous environments, freeing developers from the complexities of underlying heterogeneous systems. Developers can focus on coding unique application features, thereby enhancing productivity and reducing the cognitive load associated with real-time rendering system development.
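RenderKernel's actual language and API are not shown in this listing; the toy Python sketch below only illustrates the general idea of surfacing heterogeneity errors (here, a data-layout mismatch) at registration time rather than at run time. The device names, layouts, and registry are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Device:
    name: str
    supported_layouts: frozenset  # e.g. "AoS" (array of structs), "SoA" (struct of arrays)

DEVICES = [
    Device("cpu", frozenset({"AoS", "SoA"})),
    Device("gpu", frozenset({"SoA"})),
]

class KernelRegistry:
    """Check every target device when a kernel is registered, so layout
    mismatches surface early instead of failing mid-frame at run time."""

    def __init__(self):
        self.kernels = {}

    def register(self, name, layout, fn):
        bad = [d.name for d in DEVICES if layout not in d.supported_layouts]
        if bad:
            raise TypeError(f"kernel '{name}' uses layout {layout}, "
                            f"which is unsupported on: {', '.join(bad)}")
        self.kernels[name] = fn

registry = KernelRegistry()
registry.register("blur", "SoA", lambda buf: buf)       # accepted on cpu and gpu
try:
    registry.register("shade", "AoS", lambda buf: buf)  # gpu cannot consume AoS
except TypeError as err:
    print("rejected at registration time:", err)
```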
Citations: 0
Demers cartogram with rivers
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-01 | DOI: 10.1016/j.visinf.2024.09.003
Cartograms serve as representations of geographical and abstract data, employing a value-by-area mapping technique. As a variant of the Dorling cartogram, the Demers cartogram utilizes squares instead of circles to represent regions. This alternative approach allows for a more intuitive comparison of regions, utilizing screen space more efficiently. However, a drawback of the Dorling cartogram and its variants lies in the potential displacement of regions from their original positions, ultimately compromising legibility, readability, and accuracy. To tackle this limitation, we propose a novel hybrid cartogram layout algorithm that incorporates topological elements, such as rivers, into Demers cartograms. The presence of rivers significantly impacts both the layout and visual appearance of the cartograms. Through a user study conducted on an Electronic Health Records (EHR) dataset, we evaluate the efficacy of the proposed hybrid layout algorithm. The obtained results illustrate that this approach successfully retains key aspects of the original cartogram while enhancing legibility, readability, and overall accuracy.
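As a reminder of what "value-by-area" means for the squares in a Demers cartogram, the few lines below compute square side lengths proportional to the square root of each region's value. The region names, values, and pixel scale are illustrative assumptions; real layout algorithms, including the river-aware hybrid described here, additionally resolve overlaps and placement.

```python
import math

# Toy region values (e.g., record counts per region); the side of each Demers
# square is proportional to the square root of its value so that *area* encodes it.
regions = {"North": 400, "East": 225, "South": 100, "West": 25}

max_side_px = 80  # screen size of the largest square (illustrative)
max_val = max(regions.values())

for name, value in regions.items():
    side = max_side_px * math.sqrt(value / max_val)
    print(f"{name:5s} value={value:4d} side={side:5.1f}px area~{side * side:7.1f}px^2")
```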
Citations: 0
JobViz: Skill-driven visual exploration of job advertisements
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-01 | DOI: 10.1016/j.visinf.2024.07.001

Online job advertisements on various job portals or websites have become the most popular way for people to find potential career opportunities nowadays. However, the majority of these job sites are limited to offering fundamental filters such as job titles, keywords, and compensation ranges. This often poses a challenge for job seekers in efficiently identifying relevant job advertisements that align with their unique skill sets amidst a vast sea of listings. Thus, we propose well-coordinated visualizations to provide job seekers with three levels of detail about job information: a skill-job overview visualizes skill sets, employment posts, and the relationships between them with a hierarchical visualization design; a post exploration view leverages an augmented radar-chart glyph to represent job posts and helps users quickly grasp the skills required by each position; a post detail view lists the specifics of selected job posts for in-depth analysis and comparison. By using a real-world recruitment advertisement dataset collected from 51Job, one of the largest job websites in China, we conducted two case studies and user interviews to evaluate JobViz. The results demonstrated the usefulness and effectiveness of our approach.
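To make the three-level design concrete, here is a tiny, self-contained sketch of the kind of data such views could be driven by: skill frequencies for an overview and per-post skill coverage for a radar-style glyph. The job titles, skills, and coverage measure are invented for illustration and are not from the 51Job dataset or the JobViz system.

```python
from collections import Counter

# Toy job-post data: each post lists the skills its description mentions.
posts = {
    "Data Analyst": {"SQL", "Python", "Tableau"},
    "ML Engineer":  {"Python", "PyTorch", "SQL", "Docker"},
    "BI Developer": {"SQL", "Tableau", "ETL"},
}
my_skills = {"Python", "SQL"}

# Skill-job overview: how many posts mention each skill.
skill_counts = Counter(s for skills in posts.values() for s in skills)
print(skill_counts.most_common(3))

# Per-post glyph data: fraction of required skills the user already covers,
# the kind of quantity a radar-chart glyph could encode on one of its axes.
for title, skills in posts.items():
    coverage = len(skills & my_skills) / len(skills)
    print(f"{title:12s} requires {len(skills)} skills, coverage={coverage:.2f}")
```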

Citations: 0
Visual evaluation of graph representation learning based on the presentation of community structures
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-09-01 | DOI: 10.1016/j.visinf.2024.08.001
Various graph representation learning models convert graph nodes into vectors using techniques like matrix factorization, random walk, and deep learning. However, choosing the right method for different tasks can be challenging. Communities within networks help reveal underlying structures and correlations. Investigating how different models preserve community properties is crucial for identifying the best graph representation for data analysis. This paper defines indicators to explore the perceptual quality of community properties in representation learning spaces, including the consistency of community structure, node distribution within and between communities, and central node distribution. A visualization system presents these indicators, allowing users to evaluate models based on community structures. Case studies demonstrate the effectiveness of the indicators for the visual evaluation of graph representation learning models.
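The paper's specific indicators are not detailed in this listing; as one hedged example of the kind of quantity involved, the snippet below scores how well community labels stay separated in an embedding space using a silhouette coefficient over synthetic embeddings. The embedding model, community labels, and dimensions are all assumed.

```python
import numpy as np
from sklearn.metrics import silhouette_score

# Toy node embeddings from some representation learning model (n_nodes x dim)
# and the community label of each node from a community detection step.
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 16)) for c in range(3)])
communities = np.repeat([0, 1, 2], 50)

# One possible "community consistency" indicator: how well communities stay
# separated in the embedding space (higher is better, range [-1, 1]).
print(round(silhouette_score(emb, communities), 3))
```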
Citations: 0
VisAhoi: Towards a library to generate and integrate visualization onboarding using high-level visualization grammars
IF 3.8 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-06-27 | DOI: 10.1016/j.visinf.2024.06.001

Visualization onboarding supports users in reading, interpreting, and extracting information from visual data representations. General-purpose onboarding tools and libraries are applicable for explaining a wide range of graphical user interfaces but cannot handle specific visualization requirements. This paper describes a first step towards developing an onboarding library called VisAhoi, which is easy to integrate, extend, semi-automate, reuse, and customize. VisAhoi supports the creation of onboarding elements for different visualization types and datasets. We demonstrate how to extract and describe onboarding instructions using three well-known high-level descriptive visualization grammars: Vega-Lite, Plotly.js, and ECharts. We show the applicability of our library in two usage scenarios: first, integrating VisAhoi into a visual analytics (VA) tool for the analysis of high-throughput screening (HTS) data, and second, integrating it into a Flourish template to provide data journalists with an authoring tool for a treemap visualization. We provide a supplementary website (https://datavisyn.github.io/visAhoi/) that demonstrates the applicability of VisAhoi to various visualizations, including a bar chart, a horizon graph, a change matrix/heatmap, a scatterplot, and a treemap visualization.
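VisAhoi's own API is not reproduced here; the sketch below only illustrates the underlying idea of deriving onboarding text from a declarative grammar, using a minimal Vega-Lite-style spec written as a Python dict. The helper function and generated messages are hypothetical.

```python
# A minimal Vega-Lite bar-chart spec, written as a Python dict for brevity.
spec = {
    "mark": "bar",
    "encoding": {
        "x": {"field": "month", "type": "ordinal"},
        "y": {"field": "sales", "type": "quantitative"},
    },
}

def onboarding_messages(vl_spec: dict) -> list:
    """Derive simple reading instructions from the declarative encoding."""
    msgs = [f"This chart uses the '{vl_spec['mark']}' mark."]
    for channel, enc in vl_spec["encoding"].items():
        msgs.append(f"The {channel}-axis encodes '{enc['field']}' ({enc['type']}).")
    return msgs

for m in onboarding_messages(spec):
    print(m)
```

Because the spec is declarative, the same extraction logic transfers across chart types, which is the property the abstract relies on when it targets Vega-Lite, Plotly.js, and ECharts.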

Citations: 0