
Latest publications in Visual Informatics

Generative model-assisted sample selection for interest-driven progressive visual analytics
IF 3.8 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-01 · DOI: 10.1016/j.visinf.2024.10.004
Jie Liu, Jie Li, Jielong Kuang
We propose interest-driven progressive visual analytics. The core idea is to filter, from the given dataset, the samples whose features are of interest to analysts, and to analyze those samples. The approach relies on a generative model (GM) trained with the given dataset as its training set. The GM's characteristics make it convenient to find ideal generated samples in its latent space. We then filter the original samples that are similar to the ideal generated ones in order to explore patterns. Our research involves two methods for realizing and applying this idea. First, we give a method for exploring ideal samples in a GM's latent space. Second, we integrate the method into a system to form an embedding-based analytical workflow. Patterns found on open datasets in case studies, results of quantitative experiments, and positive feedback from experts illustrate the general usability and effectiveness of the approach.
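For readers who want the idea in concrete terms, a minimal sketch of latent-space-guided filtering follows. It is an illustration under assumed interfaces (`encoder`, `ideal_latent`, plain L2 distance), not the authors' actual algorithm.

```python
import numpy as np

def filter_samples_of_interest(encoder, dataset, ideal_latent, top_k=100):
    # encoder: assumed callable mapping a sample to its latent vector;
    # ideal_latent: a latent point the analyst picked from the GM's latent
    # space. Both are hypothetical interfaces for this sketch.
    latents = np.stack([encoder(x) for x in dataset])       # (N, d) embeddings
    dists = np.linalg.norm(latents - ideal_latent, axis=1)  # distance to the ideal sample
    order = np.argsort(dists)                               # nearest originals first
    return [dataset[i] for i in order[:top_k]]              # subset passed on for analysis
```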
Visual Informatics, Volume 8, Issue 4, Pages 97–108.
Citations: 0
ChemNav: An interactive visual tool to navigate in the latent space for chemical molecules discovery
IF 3.8 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-01 · DOI: 10.1016/j.visinf.2024.10.002
Yang Zhang, Jie Li, Xu Chao
In recent years, AI-driven drug development has emerged as a prominent research topic in computational chemistry. A key focus is the application of generative models to molecule synthesis, creating extensive virtual libraries of chemical molecules based on latent spaces. However, locating molecules with desirable properties within these vast latent spaces remains a significant challenge: large regions of invalid samples in the latent space, called "dead zones", impede exploration efficiency, and the process is time-consuming and repetitive. We therefore propose a visualization system to help experts identify molecules with desirable properties as they wander through the latent space. Specifically, we conducted a literature survey on the application of generative networks in drug synthesis to summarize the tasks, and followed this with expert interviews to determine their requirements. Based on these requirements, we introduce ChemNav, an interactive visual tool for navigating the latent space in search of desirable molecules. ChemNav incorporates a heuristic latent-space interpolation path search algorithm to enhance the efficiency of valid molecule generation, and a similar-sample search algorithm to accelerate the discovery of similar molecules. Evaluations of ChemNav through two case studies, a user study, and experiments demonstrate its effectiveness in helping researchers explore the latent space for chemical molecule discovery.
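To make the dead-zone problem concrete, the sketch below walks a straight line between two latent points and keeps only valid decodes; ChemNav's heuristic path search is more sophisticated, and `decoder` and `is_valid` are assumed interfaces, not the paper's API.

```python
import numpy as np

def interpolate_valid(decoder, is_valid, z_start, z_end, steps=20):
    # decoder: assumed callable mapping a latent vector to a molecule
    # (e.g., a SMILES string); is_valid: assumed chemical-validity check.
    found = []
    for t in np.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z_start + t * z_end   # naive linear interpolation
        molecule = decoder(z)
        if is_valid(molecule):                # skip "dead zone" samples
            found.append((float(t), molecule))
    return found
```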
Visual Informatics, Volume 8, Issue 4, Pages 60–70.
Citations: 0
Glyph design for communication initiation in real-time human-automation collaboration
IF 3.8 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-01 · DOI: 10.1016/j.visinf.2024.09.006
Magnus Nylin, Jonas Lundberg, Magnus Bång, Kostiantyn Kucher
Initiating communication and conveying critical information to the human operator is a key problem in human-automation collaboration. The problem is particularly pronounced in time-constrained, safety-critical domains such as Air Traffic Management. A visual representation should aid operators in understanding why the system initiates the communication, when the operator must act, and the consequences of not responding to the cue. Data glyphs can present multidimensional data, including temporal data, in a compact format that facilitates this type of communication. In this paper, we propose a glyph design for communication initiation by highly automated systems in Air Traffic Management, Vessel Traffic Service, and Train Traffic Management. The design was assessed by experts in these domains across three workshop sessions. The results showed that the number of glyphs presented simultaneously and the type of situation were domain-specific design aspects that needed to be adjusted for each work domain. They also showed that the core of the glyph design could be reused across domains, and that operators could successfully interpret the temporal data representations. We discuss similarities and differences in the applicability of the glyph design across the different domains and, finally, provide suggestions for future work based on the results of this study.
Visual Informatics, Volume 8, Issue 4, Pages 23–35.
Citations: 0
ATVis: Understanding and diagnosing adversarial training processes through visual analytics
IF 3.8 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-01 · DOI: 10.1016/j.visinf.2024.10.003
Fang Zhu, Xufei Zhu, Xumeng Wang, Yuxin Ma, Jieqiong Zhao
Adversarial training has emerged as a major strategy against adversarial perturbations in deep neural networks, mitigating the issue of adversaries exploiting model vulnerabilities to induce incorrect predictions. Despite enhancing robustness, adversarial training often trades off standard accuracy on normal data, a phenomenon that remains a contentious issue. In addition, the opaque nature of deep neural network models makes it difficult to inspect and diagnose how adversarial training processes evolve. This paper introduces ATVis, a visual analytics framework for examining and diagnosing adversarial training processes. Through a multi-level visualization design, ATVis enables the examination of model robustness at various granularities, facilitating a detailed understanding of the dynamics across training epochs. The framework reveals the complex relationship between adversarial robustness and standard accuracy, offering insight into the mechanisms that drive the trade-offs observed in adversarial training. The effectiveness of the framework is demonstrated through case studies.
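For context, the robustness/accuracy trade-off ATVis visualizes typically arises from training loops of the kind below: a standard PGD adversarial training step (after Madry et al.), shown here as background rather than as a component of ATVis itself.

```python
import torch
import torch.nn.functional as F

def pgd_adversarial_step(model, x, y, optimizer, eps=8/255, alpha=2/255, steps=7):
    # Inner maximization: find an L-infinity perturbation (within the
    # epsilon-ball) that maximizes the classification loss.
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend along the gradient sign
            delta.clamp_(-eps, eps)             # project back into the epsilon-ball
        delta.grad.zero_()
    # Outer minimization: update the model on the adversarial examples.
    # (In practice x + delta is also clamped to the valid input range.)
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x + delta.detach()), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```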
Visual Informatics, Volume 8, Issue 4, Pages 71–84.
Citations: 0
Incidental visualizations: How complexity factors influence task performance
IF 3.8 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-01 · DOI: 10.1016/j.visinf.2024.10.005
João Moreira, Daniel Mendes, Daniel Gonçalves
Incidental visualizations convey information to a person during an ongoing primary task, without the person consciously searching for or requesting that information. They differ from glanceable visualizations in not being people's main focus, and from ambient visualizations in not being embedded in the environment. Instead, they are presented as secondary information that can be observed without a person losing focus on their current task. However, despite extensive research on glanceable and ambient visualizations, incidental visualizations remain a novel topic in current research. To bridge this gap, we conducted an empirical user study that presented participants with an incidental visualization while they performed a primary task. We aimed to understand how contributing complexity factors (task complexity, output complexity, and pressure) affected primary task performance and incidental visualization accuracy. Our findings showed that incidental visualizations effectively conveyed information without disrupting the primary task, although working memory limitations should be considered. Additionally, output complexity and pressure significantly influenced the primary task's results. In conclusion, our study provides insights into the perception accuracy and performance impact of incidental visualizations in relation to complexity factors.
Visual Informatics, Volume 8, Issue 4, Pages 85–96.
Citations: 0
A fine-grained deconfounding study for knowledge-based visual dialog
IF 3.8 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-12-01 · DOI: 10.1016/j.visinf.2024.09.007
An-An Liu, Quanhan Wu, Chenxi Huang, Chao Xue, Xianzhu Liu, Ning Xu
Knowledge-based Visual Dialog is a challenging vision-language task in which an agent engages in dialog with humans, answering questions based on an input image and corresponding commonsense knowledge. Debiasing methods based on causal graphs have gradually attracted attention in the field of Visual Dialog (VD), yielding impressive achievements. However, existing studies focus on coarse-grained deconfounding and lack a principled analysis of the bias. In this paper, we propose a fine-grained study of deconfounding: (1) We define the confounders from two perspectives. The first is user preference (denoted U_h), derived from human-annotated dialog history, which may introduce spurious correlations between questions and answers. The second is commonsense language bias (denoted U_c), where certain words appear so frequently in the retrieved commonsense knowledge that the model tends to memorize these patterns, establishing spurious correlations between the commonsense knowledge and the answers. (2) Given that the current question directly influences answer generation, we further decompose the confounders into U_h1, U_h2 and U_c1, U_c2 based on their relevance to the current question. Specifically, U_h1 and U_c1 represent dialog history and high-frequency words that are highly correlated with the current question, while U_h2 and U_c2 are sampled from dialog history and words with low relevance to the current question. Through a comprehensive evaluation and comparison of all components, we demonstrate the necessity of jointly considering both U_h and U_c. Fine-grained deconfounding, particularly with respect to the current question, proves more effective. Ablation studies, quantitative results, and visualizations further confirm the effectiveness of the proposed method.
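The causal machinery behind this family of debiasing methods is Pearl's backdoor adjustment. In the paper's notation, with question Q, answer A, and confounders U_h and U_c, the adjusted prediction takes the following general form (shown as background; the paper's concrete estimator may differ):

```latex
% Backdoor adjustment (Pearl): replace the confounded P(A | Q) with the
% interventional distribution, marginalizing over the confounder values.
% Here u ranges over joint configurations of (U_h, U_c); in the
% fine-grained variant, of (U_h1, U_h2, U_c1, U_c2).
\[
  P(A \mid \mathrm{do}(Q)) \;=\; \sum_{u} P(A \mid Q, u)\, P(u)
\]
```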
Visual Informatics, Volume 8, Issue 4, Pages 36–47.
Citations: 0
Intelligent CAD 2.0
IF 3.8 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-10-09 · DOI: 10.1016/j.visinf.2024.10.001
Qiang Zou, Yingcai Wu, Zhenyu Liu, Weiwei Xu, Shuming Gao
Integrating modern artificial intelligence (AI) techniques, particularly generative AI, holds the promise of revolutionizing computer-aided design (CAD) tools and the engineering design process. However, the direction of “AI+CAD” remains unclear: how will the current generation of intelligent CAD (ICAD) differ from its predecessor in the 1980s and 1990s, what strategic pathways should researchers and engineers pursue for its implementation, and what potential technical challenges might arise?
As an attempt to address these questions, this paper investigates the transformative role of modern AI techniques in advancing CAD towards ICAD. It first analyzes the design process and reconsiders the roles AI techniques can assume in this process, highlighting how they can restructure the ways in which humans, computers, and designs interact. The primary conclusion is that ICAD systems should assume an intensional rather than extensional role in the design process. This offers insights into the evaluation of the previous generation of ICAD (ICAD 1.0) and outlines a prospective framework and trajectory for the next generation of ICAD (ICAD 2.0).
Visual Informatics, Volume 8, Issue 4, Pages 1–12.
Citations: 0
DPKnob: A visual analysis approach to risk-aware formulation of differential privacy schemes for data query scenarios
IF 3.8 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-01 · DOI: 10.1016/j.visinf.2024.09.002
Shuangcheng Jiao, Jiang Cheng, Zhaosong Huang, Tong Li, Tiankai Xie, Wei Chen, Yuxin Ma, Xumeng Wang
Differential privacy is an essential approach for privacy preservation in data queries. However, users face a significant challenge in selecting an appropriate privacy scheme, as they struggle to balance the utility of query results with the preservation of diverse individual privacy. Customizing a privacy scheme becomes even more complex in dealing with queries that involve multiple data attributes. When adversaries attempt to breach privacy firewalls by conducting multiple regular data queries with various attribute values, data owners must arduously discern unpredictable disclosure risks and construct suitable privacy schemes. In this paper, we propose a visual analysis approach for formulating privacy schemes of differential privacy. Our approach supports the identification and simulation of potential privacy attacks in querying statistical results of multi-dimensional databases. We also developed a prototype system, called DPKnob, which integrates multiple coordinated views. DPKnob not only allows users to interactively assess and explore privacy exposure risks by browsing high-risk attacks, but also facilitates an iterative process for formulating and optimizing privacy schemes based on differential privacy. This iterative process allows users to compare different schemes, refine their expectations of privacy and utility, and ultimately establish a well-balanced privacy scheme. The effectiveness of this study is verified by a user study and two case studies with real-world datasets.
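As background for the privacy budget such a system lets users tune, the textbook mechanism behind differentially private counting is Laplace noise scaled to sensitivity/epsilon. The sketch below is that standard mechanism, not DPKnob's implementation.

```python
import numpy as np

def dp_count_query(true_count, epsilon, sensitivity=1.0):
    # A counting query changes by at most 1 when any single individual is
    # added or removed, so its L1 sensitivity is 1 and the Laplace noise
    # scale is sensitivity / epsilon.
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Smaller epsilon means stronger privacy but noisier answers, e.g.,
# dp_count_query(1000, epsilon=0.1) fluctuates far more across calls
# than dp_count_query(1000, epsilon=1.0).
```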
Visual Informatics, Volume 8, Issue 3, Pages 42–52.
Citations: 0
AvatarWild: Fully controllable head avatars in the wild
IF 3.8 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-01 · DOI: 10.1016/j.visinf.2024.09.001
Shaoxu Meng, Tong Wu, Fang-Lue Zhang, Shu-Yu Chen, Yuewen Ma, Wenbo Hu, Lin Gao
Recent advances in neural radiance fields (NeRF) have brought significant progress toward realistic head reconstruction and manipulation. Despite these advances, capturing intricate facial details remains a persistent challenge. Moreover, casually captured input, involving both head poses and camera movements, introduces additional difficulties for existing head avatar reconstruction methods. To address the challenges posed by video captured under camera motion, we propose AvatarWild, a novel method for reconstructing head avatars from monocular videos taken with consumer devices. Notably, our approach decouples the camera pose from the head pose, allowing reconstructed avatars to be visualized with different poses and expressions from novel viewpoints. To enhance the visual quality of the reconstructed facial avatar, we introduce a view-dependent detail enhancement module designed to augment local facial details without compromising viewpoint consistency. Our method demonstrates superior performance compared to existing approaches, as evidenced by reconstruction and animation results on both multi-view and single-view datasets. Remarkably, our approach relies exclusively on video data captured by portable devices such as smartphones. This not only underscores the practicality of our method but also extends its applicability to real-world scenarios where accessibility and ease of data capture are crucial.
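The pose decoupling described above can be pictured with a small rigid-transform identity: express the camera in a head-canonical frame so that head motion and camera motion become separate inputs. This sketch shows only the generic idea, not AvatarWild's actual parameterization.

```python
import numpy as np

def camera_in_head_frame(T_world_cam, T_world_head):
    # Both arguments are assumed 4x4 rigid transforms (rotation + translation).
    # Relative to the head's canonical frame, camera motion and head motion no
    # longer entangle: the same relative pose yields the same view of the head.
    return np.linalg.inv(T_world_head) @ T_world_cam
```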
Visual Informatics, Volume 8, Issue 3, Pages 96–106.
Citations: 0
MILG: Realistic lip-sync video generation with audio-modulated image inpainting
IF 3.8 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-01 · DOI: 10.1016/j.visinf.2024.08.002
Han Bao, Xuhong Zhang, Qinying Wang, Kangming Liang, Zonghui Wang, Shouling Ji, Wenzhi Chen
Existing lip synchronization (lip-sync) methods generate accurately synchronized mouths and faces in a generated video. However, they still suffer from artifacts in regions of non-interest (RONI), e.g., the background and other parts of the face, which degrade overall visual quality. To solve this problem, we innovatively introduce diverse image inpainting to lip-sync generation. We propose Modulated Inpainting Lip-sync GAN (MILG), an audio-constrained inpainting network that predicts synchronized mouths. MILG uses prior knowledge of the RONI and audio sequences to predict lip shape instead of generating whole images, which keeps the RONI consistent. Specifically, we integrate modulated spatially probabilistic diversity normalization (MSPD Norm) into our inpainting network, which helps it generate fine-grained, diverse mouth movements guided by continuous audio features. Furthermore, to lower the training overhead, we modify the contrastive loss in lip-sync to support small-batch-size and few-sample training. Extensive experiments demonstrate that our approach outperforms the existing state of the art in image quality and authenticity while maintaining lip-sync.
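A minimal sketch of the inpainting-style setup the abstract describes: mask only the mouth region and condition the network on per-frame audio features, so the RONI is copied through unchanged. Shapes and channel layout here are illustrative assumptions, not MILG's architecture.

```python
import torch

def build_inpainting_input(frames, mouth_masks, audio_feats):
    # frames: (B, 3, H, W) video frames; mouth_masks: (B, 1, H, W) with 1 over
    # the mouth region; audio_feats: (B, C) per-frame audio embeddings. All
    # shapes are assumptions for this sketch.
    masked = frames * (1.0 - mouth_masks)                 # erase only the mouth pixels
    cond = audio_feats[:, :, None, None].expand(
        -1, -1, frames.shape[2], frames.shape[3])         # broadcast audio over space
    return torch.cat([masked, mouth_masks, cond], dim=1)  # channels: image + mask + audio
```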
Visual Informatics, Volume 8, Issue 3, Pages 71–81.
Citations: 0