
Latest Publications in Visual Informatics

A fine-grained deconfounding study for knowledge-based visual dialog
IF 3.8 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-12-01 Epub Date: 2024-10-04 DOI: 10.1016/j.visinf.2024.09.007
An-An Liu , Quanhan Wu , Chenxi Huang , Chao Xue , Xianzhu Liu , Ning Xu
Knowledge-based Visual Dialog is a challenging vision-language task, where an agent engages in dialog with humans to answer questions based on the input image and corresponding commonsense knowledge. Debiasing methods based on causal graphs have gradually attracted much attention in the field of Visual Dialog (VD), yielding impressive achievements. However, existing studies focus on coarse-grained deconfounding and lack a principled analysis of the bias. In this paper, we propose a fine-grained study of deconfounding: (1) We define the confounder from two perspectives. The first is user preference (denoted as Uh), derived from human-annotated dialog history, which may introduce spurious correlations between questions and answers. The second is commonsense language bias (denoted as Uc), where certain words appear so frequently in the retrieved commonsense knowledge that the model tends to memorize these patterns, thereby establishing spurious correlations between the commonsense knowledge and the answers. (2) Given that the current question directly influences answer generation, we further decompose the confounders into Uh1, Uh2 and Uc1, Uc2, based on their relevance to the current question. Specifically, Uh1 and Uc1 represent dialog history and high-frequency words that are highly correlated with the current question, while Uh2 and Uc2 are sampled from dialog history and words with low relevance to the current question. Through a comprehensive evaluation and comparison of all components, we demonstrate the necessity of jointly considering both Uh and Uc. Fine-grained deconfounding, particularly with respect to the current question, proves to be more effective. Ablation studies, quantitative results, and visualizations further confirm the effectiveness of the proposed method.
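The split into question-relevant (Uh1, Uc1) and question-irrelevant (Uh2, Uc2) confounders hinges on scoring each history round or word by its relevance to the current question. As a rough, hypothetical illustration (not the authors' method — the paper does not specify the relevance measure here), a bag-of-words cosine similarity with a threshold could partition dialog history like this:

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    num = sum(a[w] * b[w] for w in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def split_by_relevance(question, candidates, threshold=0.2):
    """Partition candidate texts into (high, low) relevance to the question."""
    q = Counter(question.lower().split())
    high, low = [], []
    for text in candidates:
        bucket = high if cosine(q, Counter(text.lower().split())) >= threshold else low
        bucket.append(text)
    return high, low

history = ["is the dog playing with a ball", "are there trees in the background"]
high, low = split_by_relevance("what is the dog doing", history)
```

A real system would use stronger similarity measures (e.g. learned embeddings), but the partitioning logic is the same.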
Visual Informatics, Volume 8, Issue 4, Pages 36–47.
Citations: 0
DPKnob: A visual analysis approach to risk-aware formulation of differential privacy schemes for data query scenarios
IF 3.8 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-01 Epub Date: 2024-09-11 DOI: 10.1016/j.visinf.2024.09.002
Shuangcheng Jiao , Jiang Cheng , Zhaosong Huang , Tong Li , Tiankai Xie , Wei Chen , Yuxin Ma , Xumeng Wang
Differential privacy is an essential approach for privacy preservation in data queries. However, users face a significant challenge in selecting an appropriate privacy scheme, as they struggle to balance the utility of query results with the preservation of diverse individual privacy. Customizing a privacy scheme becomes even more complex in dealing with queries that involve multiple data attributes. When adversaries attempt to breach privacy firewalls by conducting multiple regular data queries with various attribute values, data owners must arduously discern unpredictable disclosure risks and construct suitable privacy schemes. In this paper, we propose a visual analysis approach for formulating privacy schemes of differential privacy. Our approach supports the identification and simulation of potential privacy attacks in querying statistical results of multi-dimensional databases. We also developed a prototype system, called DPKnob, which integrates multiple coordinated views. DPKnob not only allows users to interactively assess and explore privacy exposure risks by browsing high-risk attacks, but also facilitates an iterative process for formulating and optimizing privacy schemes based on differential privacy. This iterative process allows users to compare different schemes, refine their expectations of privacy and utility, and ultimately establish a well-balanced privacy scheme. The effectiveness of this study is verified by a user study and two case studies with real-world datasets.
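The privacy–utility trade-off that DPKnob lets users explore ultimately comes down to the privacy budget epsilon of an underlying mechanism, such as the standard Laplace mechanism for counting queries (smaller epsilon means more noise and stronger privacy). A minimal, generic sketch of that mechanism — not DPKnob's code:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise via the inverse-CDF method."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_count(true_count, epsilon, rng=None):
    """Release a count under epsilon-DP; a counting query has sensitivity 1,
    so the noise scale is 1/epsilon."""
    rng = rng or random.Random()
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
noisy = private_count(1000, epsilon=0.5, rng=rng)
```

An adversary combining many such queries consumes budget additively, which is exactly the kind of cumulative disclosure risk DPKnob is designed to surface.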
Visual Informatics, Volume 8, Issue 3, Pages 42–52.
Citations: 0
MILG: Realistic lip-sync video generation with audio-modulated image inpainting
IF 3.8 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-01 Epub Date: 2024-09-10 DOI: 10.1016/j.visinf.2024.08.002
Han Bao , Xuhong Zhang , Qinying Wang , Kangming Liang , Zonghui Wang , Shouling Ji , Wenzhi Chen
Existing lip synchronization (lip-sync) methods generate accurately synchronized mouths and faces in a generated video. However, they still confront the problem of artifacts in regions of non-interest (RONI), e.g., background and other parts of a face, which decreases the overall visual quality. To solve these problems, we innovatively introduce diverse image inpainting to lip-sync generation. We propose Modulated Inpainting Lip-sync GAN (MILG), an audio-constraint inpainting network to predict synchronous mouths. MILG utilizes prior knowledge of RONI and audio sequences to predict lip shape instead of image generation, which can keep the RONI consistent. Specifically, we integrate modulated spatially probabilistic diversity normalization (MSPD Norm) in our inpainting network, which helps the network generate fine-grained diverse mouth movements guided by the continuous audio features. Furthermore, to lower the training overhead, we modify the contrastive loss in lip-sync to support small-batch-size and few-sample training. Extensive experiments demonstrate that our approach outperforms the existing state of the art in image quality and authenticity while preserving lip-sync.
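MILG frames lip-sync as audio-conditioned inpainting: the mouth region is treated as a hole to be filled while the rest of the frame (the RONI) is left untouched. A toy sketch of the masking step only — the region bounds and the scalar "pixel" representation are placeholders, not the paper's pipeline:

```python
def mask_mouth_region(image, top, bottom, left, right):
    """Zero out the (assumed) mouth region of a frame; an inpainting
    network would predict these pixels from audio features, while the
    untouched pixels (the RONI) stay exactly as in the source frame."""
    return [
        [0 if top <= r < bottom and left <= c < right else px
         for c, px in enumerate(row)]
        for r, row in enumerate(image)
    ]

frame = [[1] * 4 for _ in range(4)]  # a tiny 4x4 stand-in for a video frame
masked = mask_mouth_region(frame, top=2, bottom=4, left=1, right=3)
```

Because only masked pixels are regenerated, background consistency comes for free — which is the motivation for inpainting over full-frame generation.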
Visual Informatics, Volume 8, Issue 3, Pages 71–81.
Citations: 0
AvatarWild: Fully controllable head avatars in the wild
IF 3.8 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-01 Epub Date: 2024-09-12 DOI: 10.1016/j.visinf.2024.09.001
Shaoxu Meng , Tong Wu , Fang-Lue Zhang , Shu-Yu Chen , Yuewen Ma , Wenbo Hu , Lin Gao
Recent advances in neural radiance fields (NeRF) have enabled significant progress toward realistic head reconstruction and manipulation. Despite these advances, capturing intricate facial details remains a persistent challenge. Moreover, casually captured input, involving both head poses and camera movements, introduces additional difficulties to existing methods of head avatar reconstruction. To address the challenge posed by video data captured with camera motion, we propose a novel method, AvatarWild, for reconstructing head avatars from monocular videos taken by consumer devices. Notably, our approach decouples the camera pose and head pose, allowing reconstructed avatars to be visualized with different poses and expressions from novel viewpoints. To enhance the visual quality of the reconstructed facial avatar, we introduce a view-dependent detail enhancement module designed to augment local facial details without compromising viewpoint consistency. Our method demonstrates superior performance compared to existing approaches, as evidenced by reconstruction and animation results on both multi-view and single-view datasets. Remarkably, our approach stands out by exclusively relying on video data captured by portable devices, such as smartphones. This not only underscores the practicality of our method but also extends its applicability to real-world scenarios where accessibility and ease of data capture are crucial.
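Decoupling camera pose from head pose means the observed per-frame motion factors into two separate rigid transforms, so a novel camera can be swapped in while the head pose is held fixed. A schematic with pure translations and hand-picked values — the paper's actual parameterization is not given here:

```python
def matmul4(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# In a casual video, each frame's observed transform entangles camera
# motion with head motion:
camera = translation(0.0, 0.0, -2.0)  # camera moved back
head = translation(0.1, 0.0, 0.0)     # head shifted right
observed = matmul4(camera, head)

# Storing the two factors separately lets a novel viewpoint be rendered
# with the same head pose:
novel_camera = translation(0.5, 0.0, -2.0)
rerendered = matmul4(novel_camera, head)
```

Entangled methods can only replay the observed product; the factored form is what makes "different poses and expressions from novel viewpoints" possible.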
Visual Informatics, Volume 8, Issue 3, Pages 96–106.
Citations: 0
Visual exploration of multi-dimensional data via rule-based sample embedding
IF 3.8 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-01 Epub Date: 2024-09-26 DOI: 10.1016/j.visinf.2024.09.005
Tong Zhang, Jie Li, Chao Xu
We propose an approach to learning sample embedding for analyzing multi-dimensional datasets. The basic idea is to extract rules from the given dataset and learn the embedding for each sample based on the rules it satisfies. The approach can filter out pattern-irrelevant attributes, leading to significant visual structures of samples satisfying the same rules in the projection. In addition, analysts can understand a visual structure based on the rules that the involved samples satisfy, which improves the projection’s pattern interpretability. Our research involves two methods for achieving and applying the approach. First, we give a method to learn rule-based embedding for each sample. Second, we integrate the method into a system to achieve an analytical workflow. Case studies on a real-world dataset and quantitative experiment results show the usability and effectiveness of our approach.
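The core idea — representing a sample by the rules it satisfies — can be pictured as a binary indicator vector over a rule set. The rules below are hypothetical, and the paper learns embeddings rather than using raw indicators; this only conveys the basic intuition:

```python
# Each rule is a predicate over a sample (a dict of attributes); a
# sample's raw representation is the indicator vector of satisfied rules.
rules = [
    lambda s: s["age"] > 30,
    lambda s: s["income"] > 50_000,
    lambda s: s["city"] == "NYC",
]

def rule_embedding(sample, rules):
    """Binary vector: 1 where the sample satisfies the rule, else 0."""
    return [1 if rule(sample) else 0 for rule in rules]

samples = [
    {"age": 42, "income": 60_000, "city": "NYC"},
    {"age": 25, "income": 40_000, "city": "LA"},
]
embeddings = [rule_embedding(s, rules) for s in samples]
```

Samples satisfying the same rules share a vector, which is why they cluster into interpretable structures once projected.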
Visual Informatics, Volume 8, Issue 3, Pages 53–56.
Citations: 0
RenderKernel: High-level programming for real-time rendering systems
IF 3.8 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-01 Epub Date: 2024-09-16 DOI: 10.1016/j.visinf.2024.09.004
Jinyuan Yang, Soumyabrata Dev, Abraham G. Campbell
Real-time rendering applications leverage heterogeneous computing to optimize performance. However, software development across multiple devices presents challenges, including data layout inconsistencies, synchronization issues, resource management complexities, and architectural disparities. Additionally, the creation of such systems requires verbose and unsafe programming models. Recent developments in domain-specific and unified shading languages aim to mitigate these issues. Yet, current programming models primarily address data layout consistency, neglecting other persistent challenges. In this paper, we introduce RenderKernel, a programming model designed to simplify the development of real-time rendering systems. Recognizing the need for a high-level approach, RenderKernel addresses the specific challenges of real-time rendering, enabling development on heterogeneous systems as if they were homogeneous. This model allows for early detection and prevention of errors due to system heterogeneity at compile-time. Furthermore, RenderKernel enables the use of common programming patterns from homogeneous environments, freeing developers from the complexities of underlying heterogeneous systems. Developers can focus on coding unique application features, thereby enhancing productivity and reducing the cognitive load associated with real-time rendering system development.
Visual Informatics, Volume 8, Issue 3, Pages 82–95.
Citations: 0
RelicCARD: Enhancing cultural relics exploration through semantics-based augmented reality tangible interaction design
IF 3.8 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-01 Epub Date: 2024-07-25 DOI: 10.1016/j.visinf.2024.06.003
Tao Yu , Shaoxuan Lai , Wenjin Zhang , Jun Cui , Jun Tao
Cultural relics visualization brings digital archives of relics to broader audiences in many applications, such as education, historical research, and virtual museums. However, previous research mainly focused on modeling and rendering the relics. While enhancing accessibility, these techniques still provide limited ability to improve user engagement. In this paper, we introduce RelicCARD, a semantics-based augmented reality (AR) tangible interaction design for exploring cultural relics. Our design uses an easily available tangible interface to encourage the users to interact with a large collection of relics. The tangible interface allows users to explore, select, and arrange relics to form customized scenes. To guide the design of the interface, we formalize a design space by connecting the semantics in relics, the tangible interaction patterns, and the exploration tasks. We realize the design space as a tangible interactive prototype and examine its feasibility and effectiveness using multiple case studies and an expert evaluation. Finally, we discuss the findings in the evaluation and future directions to improve the design and implementation of the interactive design space.
Visual Informatics, Volume 8, Issue 3, Pages 32–41.
Citations: 0
VisAhoi: Towards a library to generate and integrate visualization onboarding using high-level visualization grammars
IF 3.8 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2024-09-01 Epub Date: 2024-06-27 DOI: 10.1016/j.visinf.2024.06.001
Christina Stoiber , Daniela Moitzi , Holger Stitz , Florian Grassinger , Anto Silviya Geo Prakash , Dominic Girardi , Marc Streit , Wolfgang Aigner

Visualization onboarding supports users in reading, interpreting, and extracting information from visual data representations. General-purpose onboarding tools and libraries are applicable for explaining a wide range of graphical user interfaces but cannot handle specific visualization requirements. This paper describes a first step towards developing an onboarding library called VisAhoi, which is easy to integrate, extend, semi-automate, reuse, and customize. VisAhoi supports the creation of onboarding elements for different visualization types and datasets. We demonstrate how to extract and describe onboarding instructions using three well-known high-level descriptive visualization grammars — Vega-Lite, Plotly.js, and ECharts. We show the applicability of our library by performing two usage scenarios that describe the integration of VisAhoi into a VA tool for the analysis of high-throughput screening (HTS) data and, second, into a Flourish template to provide an authoring tool for data journalists for a treemap visualization. We provide a supplementary website (https://datavisyn.github.io/visAhoi/) that demonstrates the applicability of VisAhoi to various visualizations, including a bar chart, a horizon graph, a change matrix/heatmap, a scatterplot, and a treemap visualization.
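Extracting onboarding instructions from a high-level grammar can be pictured as walking a chart spec and emitting one hint per encoding channel. A toy illustration over a Vega-Lite-style spec dict — not VisAhoi's actual API, whose extraction is far richer:

```python
def onboarding_messages(spec):
    """Derive simple onboarding hints from a Vega-Lite-style spec:
    one message for the mark type, one per encoding channel."""
    messages = [f"This is a {spec.get('mark')} chart."]
    for channel, enc in spec.get("encoding", {}).items():
        messages.append(
            f"The {channel} channel shows the field '{enc['field']}'."
        )
    return messages

spec = {
    "mark": "bar",
    "encoding": {
        "x": {"field": "month", "type": "ordinal"},
        "y": {"field": "sales", "type": "quantitative"},
    },
}
messages = onboarding_messages(spec)
```

Because the grammar already names marks, channels, and fields declaratively, onboarding text can be generated semi-automatically instead of being authored per chart.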

Cited by: 0
Demers cartogram with rivers
IF 3.8 · CAS Tier 3 (Computer Science) · Q2 Computer Science, Information Systems · Pub Date: 2024-09-01 · Epub Date: 2024-09-13 · DOI: 10.1016/j.visinf.2024.09.003
Qiru Wang, Kai Xu, Robert S. Laramee
Cartograms serve as representations of geographical and abstract data, employing a value-by-area mapping technique. As a variant of the Dorling cartogram, the Demers cartogram utilizes squares instead of circles to represent regions. This alternative approach allows for a more intuitive comparison of regions, utilizing screen space more efficiently. However, a drawback of the Dorling cartogram and its variants lies in the potential displacement of regions from their original positions, ultimately compromising legibility, readability, and accuracy. To tackle this limitation, we propose a novel hybrid cartogram layout algorithm that incorporates topological elements, such as rivers, into Demers cartograms. The presence of rivers significantly impacts both the layout and visual appearance of the cartograms. Through a user study conducted on an Electronic Health Records (EHR) dataset, we evaluate the efficacy of the proposed hybrid layout algorithm. The obtained results illustrate that this approach successfully retains key aspects of the original cartogram while enhancing legibility, readability, and overall accuracy.
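The value-by-area principle behind the Demers squares can be sketched in a few lines: each region's square gets a side length proportional to the square root of its data value, so area is proportional to value. This is a generic illustration only; the paper's hybrid layout algorithm additionally resolves overlaps and routes squares around rivers.

```python
import math

def demers_side(value: float, scale: float = 1.0) -> float:
    """Side length of a Demers square whose area is proportional to value."""
    return scale * math.sqrt(value)

# A region with 4x the value of another gets a square with 2x the
# side length, i.e. exactly 4x the area, preserving value-by-area.
small = demers_side(25.0)    # side 5.0  -> area 25.0
large = demers_side(100.0)   # side 10.0 -> area 100.0
print(small, large)
```

The square-root scaling is what makes square areas, rather than side lengths, comparable across regions.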
Cited by: 0
JobViz: Skill-driven visual exploration of job advertisements
IF 3.8 · CAS Tier 3 (Computer Science) · Q2 Computer Science, Information Systems · Pub Date: 2024-09-01 · Epub Date: 2024-07-19 · DOI: 10.1016/j.visinf.2024.07.001
Ran Wang , Qianhe Chen , Yong Wang , Lewei Xiong , Boyang Shen

Online job advertisements on various job portals or websites have become the most popular way for people to find potential career opportunities nowadays. However, the majority of these job sites are limited to offering fundamental filters such as job titles, keywords, and compensation ranges. This often poses a challenge for job seekers in efficiently identifying relevant job advertisements that align with their unique skill sets amidst a vast sea of listings. Thus, we propose well-coordinated visualizations to provide job seekers with three levels of detail of job information: a skill-job overview visualizes skill sets, employment posts, and the relationships between them with a hierarchical visualization design; a post exploration view leverages an augmented radar-chart glyph to represent job posts and further facilitates users' swift comprehension of the pertinent skills required by respective positions; a post detail view lists the specifics of selected job posts for in-depth analysis and comparison. By using a real-world recruitment advertisement dataset collected from 51Job, one of the largest job websites in China, we conducted two case studies and user interviews to evaluate JobViz. The results demonstrated the usefulness and effectiveness of our approach.
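The skill-driven matching the abstract motivates can be approximated by a simple overlap score between a seeker's skills and a post's required skills. The Jaccard-style scoring and the sample skill sets below are a hypothetical sketch for illustration, not JobViz's actual method, which relies on coordinated visualizations rather than a single score.

```python
def skill_overlap(seeker_skills: set, post_skills: set) -> float:
    """Jaccard similarity between a seeker's skills and a post's requirements."""
    if not seeker_skills and not post_skills:
        return 0.0
    return len(seeker_skills & post_skills) / len(seeker_skills | post_skills)

# Hypothetical job posts and a seeker's skill set.
posts = {
    "Data Analyst": {"sql", "python", "excel"},
    "Frontend Dev": {"javascript", "css", "react"},
}
mine = {"python", "sql", "pandas"}

# Rank posts by how well their required skills match the seeker's.
ranked = sorted(posts, key=lambda p: skill_overlap(mine, posts[p]), reverse=True)
print(ranked[0])
```

A visual system such as JobViz exposes this kind of skill-post relationship interactively instead of collapsing it into one number.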

Cited by: 0