
Latest articles in IEEE Transactions on Visualization and Computer Graphics

A Comprehensive Evaluation of Arbitrary Image Style Transfer Methods.
Pub Date : 2024-09-25 DOI: 10.1109/TVCG.2024.3466964
Zijun Zhou, Fan Tang, Yuxin Zhang, Oliver Deussen, Juan Cao, Weiming Dong, Xiangtao Li, Tong-Yee Lee

Despite the remarkable progress in the field of arbitrary image style transfer (AST), inconsistent evaluation continues to plague style transfer research. Existing methods often suffer from limited objective evaluation and inconsistent subjective feedback, hindering reliable comparisons among AST variants. In this study, we propose a multi-granularity assessment system that combines standardized objective and subjective evaluations. We collect a fine-grained dataset considering a range of image contexts such as different scenes, object complexities, and rich parsing information from multiple sources. Objective and subjective studies are conducted using the collected dataset. Specifically, we innovate on traditional subjective studies by developing an online evaluation system utilizing a combination of point-wise, pair-wise, and group-wise questionnaires. Finally, we bridge the gap between objective and subjective evaluations by examining the consistency between the results from the two studies. We experimentally evaluate CNN-based, flow-based, transformer-based, and diffusion-based AST methods with the proposed multi-granularity assessment system, which lays the foundation for a reliable and robust evaluation. Providing standardized measures, objective data, and detailed subjective feedback empowers researchers to make informed comparisons and drive innovation in this rapidly evolving field. The collected dataset and our online evaluation system are available at http://ivc.ia.ac.cn.
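The consistency check between the objective and subjective studies boils down to comparing how the two score the same set of methods. A minimal sketch of such a check, using a plain-Python Spearman rank correlation over hypothetical per-method scores (the actual metrics and ratings in the paper will differ), might look like:

```python
def rank(values):
    """Assign ranks (1 = smallest); assumes no ties for brevity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    """Spearman rank correlation between two equal-length score lists."""
    n = len(x)
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical per-method scores: one objective style metric vs. the
# mean subjective rating from point-wise questionnaires.
objective = [0.62, 0.48, 0.71, 0.55]
subjective = [3.9, 3.1, 4.4, 3.5]

consistency = spearman(objective, subjective)  # 1.0 when rankings agree
```

A correlation near 1 would indicate that the objective metric orders methods the same way the human raters do; values near 0 flag the kind of objective/subjective disagreement the assessment system is designed to surface.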

Citations: 0
PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets.
Pub Date : 2024-09-24 DOI: 10.1109/TVCG.2024.3456215
Jaeyoung Kim, Sihyeon Lee, Hyeon Jeon, Keon-Joo Lee, Hee-Joon Bae, Bohyoung Kim, Jinwook Seo

Acute stroke demands prompt diagnosis and treatment to achieve optimal patient outcomes. However, the intricate and irregular nature of clinical data associated with acute stroke, particularly blood pressure (BP) measurements, presents substantial obstacles to effective visual analytics and decision-making. Through a year-long collaboration with experienced neurologists, we developed PhenoFlow, a visual analytics system that leverages the collaboration between human and Large Language Models (LLMs) to analyze the extensive and complex data of acute ischemic stroke patients. PhenoFlow pioneers an innovative workflow, where the LLM serves as a data wrangler while neurologists explore and supervise the output using visualizations and natural language interactions. This approach enables neurologists to focus more on decision-making with reduced cognitive load. To protect sensitive patient information, PhenoFlow only utilizes metadata to make inferences and synthesize executable code, without accessing raw patient data. This ensures that the results are both reproducible and interpretable while maintaining patient privacy. The system incorporates a slice-and-wrap design that employs temporal folding to create an overlaid circular visualization. Combined with a linear bar graph, this design aids in exploring meaningful patterns within irregularly measured BP data. Through case studies, PhenoFlow has demonstrated its capability to support iterative analysis of extensive clinical datasets, reducing cognitive load and enabling neurologists to make well-informed decisions. Grounded in long-term collaboration with domain experts, our research demonstrates the potential of utilizing LLMs to tackle current challenges in data-driven clinical decision-making for acute ischemic stroke patients.
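The "temporal folding" behind the overlaid circular visualization amounts to mapping irregular timestamps onto one 24-hour ring, so readings from different days land at comparable angles. A rough sketch under that assumption (the helper name and data are hypothetical, and the paper's actual encoding may differ):

```python
import math
from datetime import datetime

def fold_measurements(records, period_hours=24):
    """Fold irregularly timed readings onto one circular period.

    records: list of (datetime, value). Returns (angle_radians, day_index,
    value) tuples so overlapping days can be drawn as overlaid rings.
    """
    start = min(t for t, _ in records)
    out = []
    for t, v in records:
        hours = (t - start).total_seconds() / 3600.0
        angle = (hours % period_hours) / period_hours * 2 * math.pi
        out.append((angle, int(hours // period_hours), v))
    return out

readings = [
    (datetime(2024, 1, 1, 8, 0), 142),    # day 0, 08:00
    (datetime(2024, 1, 1, 20, 30), 138),  # day 0, 20:30
    (datetime(2024, 1, 2, 8, 15), 131),   # day 1, nearly the same angle as day 0's 08:00
]
folded = fold_measurements(readings)
```

Readings taken at roughly the same time of day on different days end up at nearly identical angles but distinct day indices, which is what makes recurring daily BP patterns visible once the rings are stacked.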

Citations: 0
SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction.
Pub Date : 2024-09-24 DOI: 10.1109/TVCG.2024.3456325
Haoran Jiang, Shaohan Shi, Shuhao Zhang, Jie Zheng, Quan Li

Synthetic Lethal (SL) relationships, though rare among the vast array of gene combinations, hold substantial promise for targeted cancer therapy. Despite advancements in AI model accuracy, there is still a significant need among domain experts for interpretive paths and mechanism explorations that align better with domain-specific knowledge, particularly due to the high costs of experimentation. To address this gap, we propose an iterative Human-AI collaborative framework with two key components: 1) Human-Engaged Knowledge Graph Refinement based on Metapath Strategies, which leverages insights from interpretive paths and domain expertise to refine the knowledge graph through metapath strategies with appropriate granularity. 2) Cross-Granularity SL Interpretation Enhancement and Mechanism Analysis, which aids experts in organizing and comparing predictions and interpretive paths across different granularities, uncovering new SL relationships, enhancing result interpretation, and elucidating potential mechanisms inferred by Graph Neural Network (GNN) models. These components cyclically optimize model predictions and mechanism explorations, enhancing expert involvement and intervention to build trust. Facilitated by SLInterpreter, this framework ensures that newly generated interpretive paths increasingly align with domain knowledge and adhere more closely to real-world biological principles through iterative Human-AI collaboration. We evaluate the framework's efficacy through a case study and expert interviews.
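A metapath strategy constrains which paths through a heterogeneous knowledge graph count as interpretive evidence, e.g. gene → pathway → gene. A toy stand-in for that enumeration step (the graph, node types, and gene names here are illustrative, not the paper's data):

```python
def metapath_instances(edges, node_type, metapath):
    """Enumerate simple paths whose node types match the given metapath.

    edges: adjacency dict {node: set(neighbors)} (undirected here).
    node_type: {node: type string}; metapath: list of type names.
    """
    paths = [[n] for n, t in node_type.items() if t == metapath[0]]
    for step_type in metapath[1:]:
        paths = [
            p + [nxt]
            for p in paths
            for nxt in edges.get(p[-1], set())
            if node_type[nxt] == step_type and nxt not in p
        ]
    return paths

# Tiny illustrative graph: two genes sharing one pathway node.
edges = {
    "BRCA1": {"DNA_repair"},
    "PARP1": {"DNA_repair"},
    "DNA_repair": {"BRCA1", "PARP1"},
}
node_type = {"BRCA1": "gene", "PARP1": "gene", "DNA_repair": "pathway"}

pairs = metapath_instances(edges, node_type, ["gene", "pathway", "gene"])
```

Each returned path is a candidate interpretive path; tightening or loosening the metapath (adding intermediate node types) is one way to control the granularity the abstract refers to.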

Citations: 0
Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks.
Pub Date : 2024-09-23 DOI: 10.1109/TVCG.2024.3456186
Klaus Eckelt, Kiran Gadhave, Alexander Lex, Marc Streit

Exploratory data science is an iterative process of obtaining, cleaning, profiling, analyzing, and interpreting data. This cyclical way of working creates challenges within the linear structure of computational notebooks, leading to issues with code quality, recall, and reproducibility. To remedy this, we present Loops, a set of visual support techniques for iterative and exploratory data analysis in computational notebooks. Loops leverages provenance information to visualize the impact of changes made within a notebook. In visualizations of the notebook provenance, we trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and their respective differences. Analysts can explore these differences in detail in a separate view. Loops not only makes the analysis process transparent but also supports analysts in their data science work by showing the effects of changes and facilitating comparison of multiple versions. We demonstrate our approach's utility and potential impact in two use cases and feedback from notebook users from various backgrounds. This paper and all supplemental materials are available at https://osf.io/79eyn.
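Highlighting differences between notebook versions reduces, for code cells, to a textual diff over each cell's recorded source. A minimal sketch with Python's standard difflib (the cell contents below are hypothetical; Loops additionally diffs markdown, tables, and images):

```python
import difflib

def cell_diff(old_src, new_src):
    """Unified diff between two versions of one notebook cell's source."""
    return list(difflib.unified_diff(
        old_src.splitlines(), new_src.splitlines(),
        fromfile="v1", tofile="v2", lineterm=""))

v1 = "df = pd.read_csv('data.csv')\ndf = df.dropna()"
v2 = "df = pd.read_csv('data.csv')\ndf = df.fillna(0)"
changes = cell_diff(v1, v2)
for line in changes:
    print(line)
```

Lines prefixed with `-`/`+` are what a provenance view would highlight as removed/added between the two versions, making visible, for instance, that a cleaning step switched from dropping to imputing missing values.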

Citations: 0
StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions.
Pub Date : 2024-09-23 DOI: 10.1109/TVCG.2024.3456363
Zixin Chen, Jiachen Wang, Meng Xia, Kento Shigyo, Dingdong Liu, Rong Zhang, Huamin Qu

The integration of Large Language Models (LLMs), especially ChatGPT, into education is poised to revolutionize students' learning experiences by introducing innovative conversational learning methodologies. To empower students to fully leverage the capabilities of ChatGPT in educational scenarios, understanding students' interaction patterns with ChatGPT is crucial for instructors. However, this endeavor is challenging due to the absence of datasets focused on student-ChatGPT conversations and the complexities in identifying and analyzing the evolving interaction patterns within conversations. To address these challenges, we collected conversational data from 48 students interacting with ChatGPT in a master's level data visualization course over one semester. We then developed a coding scheme, grounded in the literature on cognitive levels and thematic analysis, to categorize students' interaction patterns with ChatGPT. Furthermore, we present a visual analytics system, StuGPTViz, that tracks and compares temporal patterns in student prompts and the quality of ChatGPT's responses at multiple scales, revealing significant pedagogical insights for instructors. We validated the system's effectiveness through expert interviews with six data visualization instructors and three case studies. The results confirmed StuGPTViz's capacity to enhance educators' insights into the pedagogical value of ChatGPT. We also discussed the potential research opportunities of applying visual analytics in education and developing AI-driven personalized learning solutions.
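A coding scheme over student prompts maps each utterance to a category before any aggregation or visualization. As a rough illustration only, the toy rules below assign made-up cognitive-level labels via keyword matching; the paper's actual scheme is grounded in the literature and applied by human coders, not keywords:

```python
from collections import Counter

# Hypothetical keyword rules standing in for a real coding scheme.
RULES = [
    ("evaluate", ["compare", "critique", "which is better"]),
    ("apply",    ["how do i", "fix", "implement"]),
    ("remember", ["what is", "define", "list"]),
]

def code_prompt(prompt):
    """Assign the first matching cognitive-level code, else 'other'."""
    p = prompt.lower()
    for label, keywords in RULES:
        if any(k in p for k in keywords):
            return label
    return "other"

prompts = [
    "What is a treemap?",
    "How do I fix this d3 scale?",
    "Compare bar charts and pie charts for part-to-whole data.",
]
distribution = Counter(code_prompt(p) for p in prompts)
```

The resulting per-category counts are the kind of data a view like StuGPTViz would then track over time and compare across students.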

Citations: 0
BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM.
Pub Date : 2024-09-23 DOI: 10.1109/TVCG.2024.3456315
Andreas Walch, Attila Szabo, Harald Steinlechner, Thomas Ortner, Eduard Groller, Johanna Schmidt

Building Information Modeling (BIM) describes a central data pool covering the entire life cycle of a construction project. Similarly, Building Energy Modeling (BEM) describes the process of using a 3D representation of a building as a basis for thermal simulations to assess the building's energy performance. This paper explores the intersection of BIM and BEM, focusing on the challenges and methodologies in converting BIM data into BEM representations for energy performance analysis. BEMTrace integrates 3D data wrangling techniques with visualization methodologies to enhance the accuracy and traceability of the BIM-to-BEM conversion process. Through parsing, error detection, and algorithmic correction of BIM data, our methods generate valid BEM models suitable for energy simulation. Visualization techniques provide transparent insights into the conversion process, aiding error identification, validation, and user comprehension. We introduce context-adaptive selections to facilitate user interaction and to show that the BEMTrace workflow helps users understand complex 3D data wrangling processes.
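The error-detection step must flag geometry that would break a thermal simulation before it reaches the engine. A toy sketch of such a check on 2D surface footprints (the surface names and thresholds are invented; real BIM-to-BEM validation works on full 3D solids and many more rules):

```python
def polygon_area(pts):
    """Signed shoelace area of a 2D polygon given as [(x, y), ...]."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return s / 2.0

def validate_surfaces(surfaces, min_area=1e-6):
    """Flag degenerate surfaces: too few vertices or near-zero area."""
    errors = []
    for name, pts in surfaces.items():
        if len(pts) < 3:
            errors.append((name, "fewer than 3 vertices"))
        elif abs(polygon_area(pts)) < min_area:
            errors.append((name, "zero area"))
    return errors

surfaces = {
    "wall_ok":  [(0, 0), (4, 0), (4, 3), (0, 3)],  # valid 4 m x 3 m wall
    "sliver":   [(0, 0), (2, 0), (1, 0)],          # collinear -> zero area
    "dangling": [(0, 0), (1, 1)],                  # not a polygon
}
problems = validate_surfaces(surfaces)
```

Each flagged surface, paired with its reason, is exactly the kind of record a traceability visualization can link back to the originating BIM element for inspection or algorithmic correction.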

Citations: 0
DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic.
Pub Date : 2024-09-23 DOI: 10.1109/TVCG.2024.3456391
Brian Montambault, Gabriel Appleby, Jen Rogers, Camelia D Brumar, Mingwei Li, Remco Chang

Dimensionality reduction techniques are widely used for visualizing high-dimensional data. However, support for interpreting patterns of dimension reduction results in the context of the original data space is often insufficient. Consequently, users may struggle to extract insights from the projections. In this paper, we introduce DimBridge, a visual analytics tool that allows users to interact with visual patterns in a projection and retrieve corresponding data patterns. DimBridge supports several interactions, allowing users to perform various analyses, from contrasting multiple clusters to explaining complex latent structures. Leveraging first-order predicate logic, DimBridge identifies subspaces in the original dimensions relevant to a queried pattern and provides an interface for users to visualize and interact with them. We demonstrate how DimBridge can help users overcome the challenges associated with interpreting visual patterns in projections.
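The simplest predicate over the original dimensions is an axis-aligned interval, e.g. "0.1 ≤ x₀ ≤ 0.2", fitted to a brushed cluster and scored by how exclusively it covers the selection. A minimal sketch under that assumption (data and selection are hypothetical; DimBridge's actual predicate search is more general):

```python
def interval_predicate(data, selected, dim):
    """Fit an axis-aligned interval on one dimension covering a selection.

    Returns (lo, hi, precision): the interval spanned by selected points,
    and the fraction of points inside that interval that were selected.
    """
    vals = [row[dim] for i, row in enumerate(data) if i in selected]
    lo, hi = min(vals), max(vals)
    inside = [i for i, row in enumerate(data) if lo <= row[dim] <= hi]
    precision = len(set(inside) & selected) / len(inside)
    return lo, hi, precision

# Hypothetical 2-D data; points 0-2 form the brushed cluster in the projection.
data = [(0.1, 5.0), (0.2, 5.2), (0.15, 4.9), (0.8, 1.0), (0.9, 1.1)]
selected = {0, 1, 2}

lo, hi, prec = interval_predicate(data, selected, dim=0)
```

A precision of 1.0 means the interval on that dimension separates the brushed cluster perfectly, so "lo ≤ dim ≤ hi" is a faithful data-space explanation of the visual pattern; low-precision dimensions would be dropped from the explanation.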

Citations: 0
GraspDiff: Grasping Generation for Hand-Object Interaction With Multimodal Guided Diffusion.
Pub Date : 2024-09-23 DOI: 10.1109/TVCG.2024.3466190
Binghui Zuo, Zimeng Zhao, Wenqian Sun, Xiaohan Yuan, Zhipeng Yu, Yangang Wang

Grasping generation holds significant importance in both robotics and AI-generated content. While pure network paradigms based on VAEs or GANs ensure diversity in outcomes, they often fall short of achieving plausibility. Additionally, although two-step paradigms that first predict contact and then optimize distance yield plausible results, they are known to be time-consuming. This paper introduces a novel paradigm powered by DDPM, accommodating diverse modalities with varying interaction granularities as its generating conditions, including 3D object, contact affordance, and image content. Our key idea is that the iterative steps inherent to diffusion models can supplant the iterative optimization routines in existing optimization methods, thereby endowing the generated results from our method with both diversity and plausibility. Using the same training data, our paradigm achieves superior generation performance and competitive generation speed compared to optimization-based paradigms. Extensive experiments on both in-domain and out-of-domain objects demonstrate that our method achieves significant improvements over state-of-the-art methods. We will release the code for research purposes.
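The "iterative steps inherent to diffusion models" that the abstract leans on are the DDPM reverse updates: start from noise and repeatedly subtract the predicted noise under a fixed beta schedule. A heavily simplified scalar sketch of that loop (the noise predictor is a stub; in the paper it would be a trained network conditioned on the 3D object, contact affordance, or image content, operating on full grasp parameters):

```python
import math
import random

def ddpm_sample(predict_noise, steps=50, seed=0):
    """Minimal DDPM-style reverse loop producing one scalar sample."""
    rng = random.Random(seed)
    # Linear beta schedule, a common DDPM default.
    betas = [1e-4 + (0.02 - 1e-4) * t / (steps - 1) for t in range(steps)]
    alphas = [1 - b for b in betas]
    alpha_bars, prod = [], 1.0
    for a in alphas:
        prod *= a
        alpha_bars.append(prod)

    x = rng.gauss(0, 1)  # start from pure noise
    for t in reversed(range(steps)):
        eps = predict_noise(x, t)  # network's noise estimate at step t
        coef = betas[t] / math.sqrt(1 - alpha_bars[t])
        x = (x - coef * eps) / math.sqrt(alphas[t])
        if t > 0:
            x += math.sqrt(betas[t]) * rng.gauss(0, 1)  # stochastic term
    return x

# Stub predictor that mildly steers toward zero; a real model is learned.
grasp_param = ddpm_sample(lambda x, t: x * 0.1)
```

The point of the sketch is structural: each reverse step plays the role that one iteration of a hand-crafted distance optimizer plays in two-step paradigms, which is why the denoising chain can replace that optimization loop entirely.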

Citations: 0
Real-and-Present: Investigating the Use of Life-Size 2D Video Avatars in HMD-Based AR Teleconferencing.
Pub Date : 2024-09-23 DOI: 10.1109/TVCG.2024.3466554
Xuanyu Wang, Weizhan Zhang, Christian Sandor, Hongbo Fu

Augmented Reality (AR) teleconferencing allows spatially distributed users to interact with each other in 3D through agents in their own physical environments. Existing methods leveraging volumetric capturing and reconstruction can provide a high-fidelity experience but are often too complex and expensive for everyday use. Other solutions target mobile, effortless-to-set-up teleconferencing on AR Head Mounted Displays (HMDs). They directly transplant conventional video conferencing onto an AR-HMD platform or use avatars to represent remote participants. However, they can support either high fidelity or a high level of co-presence, but not both. Moreover, the limited Field of View (FoV) of HMDs can further degrade users' immersive experience. To achieve a balance between fidelity and co-presence, we explore using life-size 2D video-based avatars (video avatars for short) in AR teleconferencing. Specifically, given the potential effect of FoV on users' perception of proximity, we first conducted a pilot study to explore the local-user-centered optimal placement of video avatars in small-group AR conversations. With the placement results, we then implemented a proof-of-concept prototype of video-avatar-based teleconferencing. We conducted user evaluations with our prototype to verify its effectiveness in balancing fidelity and co-presence. Following indications from the pilot study, we further quantitatively explored the effect of FoV size on the video avatar's optimal placement through a user study involving more FoV conditions in a VR-simulated environment. We regress placement models to serve as references for computationally determining video avatar placements in such teleconferencing applications on various existing AR HMDs and future ones with bigger FoVs.

Citations: 0
Reducing Search Regions for Fast Detection of Exact Point-to-Point Geodesic Paths on Meshes.
Pub Date : 2024-09-23 DOI: 10.1109/TVCG.2024.3466242
Shuai Ma, Wencheng Wang, Fei Hou

Fast detection of exact point-to-point geodesic paths on meshes remains challenging for existing methods. We therefore present a method that, for efficiency, reduces the region of the mesh to be investigated. Our key observation is that a mesh and its simplified version are so alike that the geodesic path between two given points on the mesh and the geodesic path between their corresponding points on the simplified mesh lie very close to each other in 3D Euclidean space. Thus, from the geodesic path on the simplified mesh, we can generate a region on the original mesh that contains its geodesic path, called the search region, with which existing methods can reduce their search scope when detecting geodesic paths and so obtain acceleration. We demonstrate the rationale behind our proposed method. Experimental results show that it accelerates existing methods considerably; e.g., the global exact method VTP (vertex-oriented triangle propagation) can be sped up by over 200 times when handling large meshes. Our search region can also speed up path initialization using the Dijkstra algorithm to benefit local methods, e.g., obtaining an acceleration of at least two times in our tests.
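The abstract's idea of restricting Dijkstra-based path initialization to a search region can be sketched as a shortest-path search over a vertex subset. This is an illustrative sketch, not the paper's code: `adj` is a hypothetical weighted adjacency structure over mesh vertices, and `region` plays the role of the paper's search region derived from the simplified mesh.

```python
import heapq

def dijkstra_in_region(adj, src, dst, region=None):
    """Shortest src-dst distance, visiting only vertices in `region`.
    adj: {v: [(u, w), ...]} with edge weights w. region=None searches
    the full graph; a smaller region shrinks the search scope."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if v == dst:
            return d
        if d > dist.get(v, float("inf")):
            continue                             # stale queue entry
        for u, w in adj.get(v, []):
            if region is not None and u not in region:
                continue                         # skip vertices outside the region
            nd = d + w
            if nd < dist.get(u, float("inf")):
                dist[u] = nd
                heapq.heappush(pq, (nd, u))
    return float("inf")
```

The speedup claimed in the abstract corresponds to `region` being much smaller than the full vertex set while still containing the true geodesic path, so the restricted search explores far fewer vertices without changing the result.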

Citations: 0
Journal
IEEE transactions on visualization and computer graphics