Latest publications: IEEE Transactions on Visualization and Computer Graphics

GraspDiff: Grasping Generation for Hand-Object Interaction With Multimodal Guided Diffusion.
Pub Date : 2024-09-23 DOI: 10.1109/TVCG.2024.3466190
Binghui Zuo, Zimeng Zhao, Wenqian Sun, Xiaohan Yuan, Zhipeng Yu, Yangang Wang

Grasping generation holds significant importance in both robotics and AI-generated content. While pure network paradigms based on VAEs or GANs ensure diversity in outcomes, they often fall short of achieving plausibility. Two-step paradigms that first predict contact and then optimize distance yield plausible results, but they are known to be time-consuming. This paper introduces a novel paradigm powered by DDPM, accommodating diverse modalities with varying interaction granularities as its generating conditions, including 3D object, contact affordance, and image content. Our key idea is that the iterative steps inherent to diffusion models can supplant the iterative optimization routines in existing optimization methods, thereby endowing the generated results with both diversity and plausibility. Using the same training data, our paradigm achieves superior generation performance and competitive generation speed compared to optimization-based paradigms. Extensive experiments on both in-domain and out-of-domain objects demonstrate that our method achieves significant improvement over the SOTA method. We will release the code for research purposes.
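The paper publishes no code here, but its core idea, replacing an iterative distance-optimization loop with the iterative denoising steps of a DDPM, can be sketched generically. The noise schedule, the 63-dimensional pose vector, and the toy `denoise_fn` below are illustrative assumptions, not the authors' model:

```python
import numpy as np

def ddpm_sample(denoise_fn, cond, dim=63, steps=50, seed=0):
    """Generic DDPM ancestral-sampling loop with a toy linear beta schedule.

    denoise_fn(x, t, cond) predicts the noise added at step t; cond is any
    conditioning vector (e.g. an object, contact, or image embedding).
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(dim)                   # start from pure noise
    for t in reversed(range(steps)):
        eps = denoise_fn(x, t, cond)               # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])  # posterior mean
        if t > 0:                                  # inject noise except at the last step
            x = x + np.sqrt(betas[t]) * rng.standard_normal(dim)
    return x

# Toy stand-in denoiser that nudges samples toward the conditioning vector.
pose = ddpm_sample(lambda x, t, c: x - c, cond=np.ones(63))
```

The iterative refinement happens inside the sampling loop itself, which is what lets a diffusion model replace a separate post-hoc optimization stage.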

Citations: 0
DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing - A Design Study.
Pub Date : 2024-09-23 DOI: 10.1109/TVCG.2024.3456329
Alexander Wyss, Gabriela Morgenshtern, Amanda Hirsch-Husler, Jurgen Bernard

In medical diagnostics of both early disease detection and routine patient care, particle-based contamination of in-vitro diagnostics consumables poses a significant threat to patients. Objective, data-driven decision-making on the severity of contamination is key to reducing patient risk while saving time and cost in quality assessment. Our collaborators introduced us to their quality control process, including particle data acquisition through image recognition, feature extraction, and attributes reflecting the production context of particles. The current process is limited in its support for exploring thousands of images, for data-driven decision making, and for effective knowledge externalization. Following the design study methodology, our contributions are a characterization of the problem space and requirements, the development and validation of DaedalusData, a comprehensive discussion of our study's learnings, and a generalizable framework for knowledge externalization. DaedalusData is a visual analytics system that enables domain experts to explore particle contamination patterns, label particles in label alphabets, and externalize knowledge through semi-supervised label-informed data projections. The results of our case study and user study show high usability of DaedalusData and its efficient support of experts in generating comprehensive overviews of thousands of particles, labeling large quantities of particles, and externalizing knowledge to further augment the dataset. Reflecting on our approach, we discuss insights on dataset augmentation via human knowledge externalization, and on the scalability and trade-offs that come with adopting this approach in practice.

Citations: 0
Real-and-Present: Investigating the Use of Life-Size 2D Video Avatars in HMD-Based AR Teleconferencing.
Pub Date : 2024-09-23 DOI: 10.1109/TVCG.2024.3466554
Xuanyu Wang, Weizhan Zhang, Christian Sandor, Hongbo Fu

Augmented Reality (AR) teleconferencing allows spatially distributed users to interact with each other in 3D through agents in their own physical environments. Existing methods leveraging volumetric capturing and reconstruction can provide a high-fidelity experience but are often too complex and expensive for everyday use. Other solutions target mobile, effortless-to-set-up teleconferencing on AR Head-Mounted Displays (HMDs). They directly transplant conventional video conferencing onto an AR-HMD platform or use avatars to represent remote participants. However, they can support either high fidelity or a high level of co-presence, but not both. Moreover, the limited Field of View (FoV) of HMDs can further degrade users' immersive experience. To balance fidelity and co-presence, we explore using life-size 2D video-based avatars (video avatars for short) in AR teleconferencing. Specifically, given the potential effect of FoV on users' perception of proximity, we first conducted a pilot study to explore the local-user-centered optimal placement of video avatars in small-group AR conversations. With the placement results, we then implemented a proof-of-concept prototype of video-avatar-based teleconferencing. We conducted user evaluations with our prototype to verify its effectiveness in balancing fidelity and co-presence. Following the indications of the pilot study, we further quantitatively explored the effect of FoV size on the video avatar's optimal placement through a user study involving more FoV conditions in a VR-simulated environment. We regress placement models to serve as references for computationally determining video avatar placements in such teleconferencing applications on various existing AR HMDs and future ones with bigger FoVs.

Citations: 0
Reducing Search Regions for Fast Detection of Exact Point-to-Point Geodesic Paths on Meshes.
Pub Date : 2024-09-23 DOI: 10.1109/TVCG.2024.3466242
Shuai Ma, Wencheng Wang, Fei Hou

Fast detection of exact point-to-point geodesic paths on meshes remains challenging for existing methods. We present a method that reduces the region of the mesh to be investigated, improving efficiency. We observe that a mesh and its simplified counterpart are so alike that the geodesic path between two points on the mesh and the geodesic path between their corresponding points on the simplified mesh lie very near each other in 3D Euclidean space. Thus, from the geodesic path on the simplified mesh, we can generate a region on the original mesh that contains the geodesic path on that mesh, called the search region, with which existing methods can reduce their search scope when detecting geodesic paths and thereby gain acceleration. We demonstrate the rationale behind our proposed method. Experimental results show that our method accelerates existing methods considerably; e.g., the global exact method VTP (vertex-oriented triangle propagation) can be sped up by over 200 times when handling large meshes. Our search region can also speed up path initialization with the Dijkstra algorithm to accelerate local methods, e.g., achieving a speedup of at least two times in our tests.
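As an illustration of the general idea, not the paper's actual construction, one can compute a coarse shortest path with Dijkstra's algorithm on the mesh's edge graph and then keep only vertices near that path as the restricted search region. The hop-based band below is a simplified stand-in for the authors' geometric region:

```python
import heapq

def dijkstra_path(adj, src, dst):
    """Shortest path on a weighted graph given as {u: [(v, w), ...]}."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def search_region(adj, path, hops=1):
    """Vertices within `hops` graph hops of the coarse path: a hypothetical
    stand-in for the band around the simplified mesh's geodesic."""
    region = set(path)
    for _ in range(hops):
        region |= {v for u in region for v, _ in adj[u]}
    return region

# Tiny example: a triangle graph standing in for a mesh edge graph.
adj = {0: [(1, 1.0), (2, 4.0)],
       1: [(0, 1.0), (2, 1.0)],
       2: [(1, 1.0), (0, 4.0)]}
coarse = dijkstra_path(adj, 0, 2)   # [0, 1, 2]
region = search_region(adj, coarse)
```

An exact solver would then run only inside `region`, which is how restricting the search scope yields the reported speedups.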

Citations: 0
Towards Enhancing Low Vision Usability of Data Charts on Smartphones.
Pub Date : 2024-09-20 DOI: 10.1109/TVCG.2024.3456348
Yash Prakash, Pathan Aseef Khan, Akshay Kolgar Nayak, Sampath Jayarathna, Hae-Na Lee, Vikas Ashok

The importance of data charts is self-evident, given their ability to express complex data in a simple format that facilitates quick and easy comparisons, analysis, and consumption. However, the inherent visual nature of the charts creates barriers for people with visual impairments to reap the associated benefits to the same extent as their sighted peers. While extant research has predominantly focused on understanding and addressing these barriers for blind screen reader users, the needs of low-vision screen magnifier users have been largely overlooked. In an interview study, almost all low-vision participants stated that it was challenging to interact with data charts on small screen devices such as smartphones and tablets, even though they could technically "see" the chart content. They ascribed these challenges mainly to the magnification-induced loss of visual context that connected data points with each other and also with chart annotations, e.g., axis values. In this paper, we present a method that addresses this problem by automatically transforming charts that are typically non-interactive images into personalizable interactive charts which allow selective viewing of desired data points and preserve visual context as much as possible under screen enlargement. We evaluated our method in a usability study with 26 low-vision participants, who all performed a set of representative chart-related tasks under different study conditions. In the study, we observed that our method significantly improved the usability of charts over both the status quo screen magnifier and a state-of-the-art space compaction-based solution.

Citations: 0
Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration.
Pub Date : 2024-09-20 DOI: 10.1109/TVCG.2024.3456311
Jinrui Wang, Xinhuan Shu, Benjamin Bach, Uta Hinrichs

This paper defines, analyzes, and discusses the emerging genre of visualization atlases. We currently witness an increase in web-based, data-driven initiatives that call themselves "atlases" while explaining complex, contemporary issues through data and visualizations: climate change, sustainability, AI, or cultural discoveries. To understand this emerging genre and inform their design, study, and authoring support, we conducted a systematic analysis of 33 visualization atlases and semi-structured interviews with eight visualization atlas creators. Based on our results, we contribute (1) a definition of a visualization atlas as a compendium of (web) pages aimed at explaining and supporting exploration of data about a dedicated topic through data, visualizations, and narration, (2) a set of design patterns across 8 design dimensions, (3) insights into atlas creation drawn from the interviews, and (4) a definition of 5 visualization atlas genres. We found that visualization atlases are unique in the way they combine i) exploratory visualization, ii) narrative elements from data-driven storytelling, and iii) structured navigation mechanisms. They target a wide range of audiences with different levels of domain knowledge, acting as tools for study, communication, and discovery. We conclude with a discussion of current design practices and emerging questions around the ethics and potential real-world impact of visualization atlases, aimed at informing the design and study of visualization atlases.

Citations: 0
Generalization of CNNs on Relational Reasoning With Bar Charts.
Pub Date : 2024-09-19 DOI: 10.1109/TVCG.2024.3463800
Zhenxing Cui, Lu Chen, Yunhai Wang, Daniel Haehn, Yong Wang, Hanspeter Pfister

This paper presents a systematic study of the generalization of convolutional neural networks (CNNs) and humans on relational reasoning tasks with bar charts. We first revisit previous experiments on graphical perception and update the benchmark performance of CNNs. We then test the generalization performance of CNNs on a classic relational reasoning task, estimating bar length ratios in a bar chart, by progressively perturbing the standard visualizations. We further conduct a user study to compare the performance of CNNs and humans. Our results show that CNNs outperform humans only when the training and test data have the same visual encodings; otherwise, they may perform worse. We also find that CNNs are sensitive to perturbations in various visual encodings, regardless of their relevance to the target bars, whereas humans are mainly influenced by bar lengths. Our study suggests that robust relational reasoning over visualizations is challenging for CNNs, and that improving their generalization performance may require training them to better recognize task-related visual properties.
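A minimal sketch of the kind of stimulus such a study rests on: a rasterized two-bar chart with a known length-ratio label, plus an optional positional perturbation for probing encoding robustness. The image size, bar width, and jitter scheme are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def bar_chart_stimulus(h1, h2, size=64, width=6, rng=None, jitter=0):
    """Rasterize a two-bar chart; the label is the shorter/taller length ratio.

    `jitter` shifts the bars' x-positions randomly, a simple example of
    perturbing a task-irrelevant visual encoding (bar position, not length).
    """
    img = np.zeros((size, size), dtype=np.float32)
    xs = [size // 3, 2 * size // 3]
    if jitter and rng is not None:
        xs = [x + int(rng.integers(-jitter, jitter + 1)) for x in xs]
    for x, h in zip(xs, (h1, h2)):
        img[size - h:, x:x + width] = 1.0   # bars grow up from the baseline
    return img, min(h1, h2) / max(h1, h2)

img, ratio = bar_chart_stimulus(20, 40)     # ratio label is 0.5
```

Training on unperturbed charts and testing with `jitter > 0` is the kind of train/test encoding mismatch under which the paper reports CNN performance degrading.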

Citations: 0
Adaptive Complementary Filter for Hybrid Inside-Out Outside-In HMD Tracking With Smooth Transitions.
Pub Date : 2024-09-19 DOI: 10.1109/TVCG.2024.3464738
Riccardo Monica, Dario Lodi Rizzini, Jacopo Aleotti

Head-mounted displays (HMDs) in room-scale virtual reality are usually tracked using inside-out visual SLAM algorithms. Alternatively, to track the motion of the HMD with respect to a fixed real-world reference frame, outside-in instrumentation such as a motion capture system can be adopted. However, outside-in tracking systems may temporarily lose tracking, as they suffer from occlusion and blind spots. A possible solution is a hybrid approach in which the inside-out tracker of the HMD is augmented with an outside-in sensing system. On the other hand, when the tracking signal of the outside-in system is recovered after a loss of tracking, the transition from inside-out tracking to hybrid tracking may generate a discontinuity, i.e., a sudden change of the virtual viewpoint, which can be uncomfortable for the user. Therefore, hybrid tracking solutions for HMDs require advanced sensor fusion algorithms to achieve a smooth transition. This work proposes a method for hybrid tracking of an HMD with smooth transitions based on an adaptive complementary filter. The proposed approach can be configured with several parameters that determine a trade-off between user experience and tracking error. A user study was carried out in a room-scale virtual reality environment, where users carried out two different tasks while multiple tracking-signal losses of the outside-in sensor system occurred. The results show that the proposed approach improves user experience compared to a standard Extended Kalman Filter, and that tracking error is lower compared to a state-of-the-art complementary filter when configured for the same quality of user experience.
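The paper's filter operates on full 6-DoF HMD poses; the 1-D toy below, with made-up `gain` and `ramp` parameters, only illustrates the underlying mechanism: blend a drift-prone inside-out estimate with the absolute outside-in one, and after a tracking loss ramp the correction weight back up gradually so the virtual viewpoint does not jump:

```python
class AdaptiveComplementaryFilter:
    """Toy 1-D position version of an adaptive complementary filter.

    `gain` is the steady-state outside-in blend weight; `ramp` is how fast
    that weight re-engages after the outside-in signal is recovered.
    Both values here are illustrative, not the paper's tuned parameters.
    """
    def __init__(self, gain=0.05, ramp=0.01):
        self.gain, self.ramp = gain, ramp
        self.alpha = 0.0      # current outside-in blend weight
        self.offset = 0.0     # accumulated correction on the inside-out pose

    def update(self, inside_out, outside_in):
        if outside_in is None:            # outside-in tracking lost
            self.alpha = 0.0              # will re-engage smoothly later
            return inside_out + self.offset
        # Ramp the blend weight up toward its steady-state gain.
        self.alpha = min(self.gain, self.alpha + self.ramp)
        # Pull the corrected pose toward the absolute outside-in measurement.
        err = outside_in - (inside_out + self.offset)
        self.offset += self.alpha * err
        return inside_out + self.offset

f = AdaptiveComplementaryFilter()
first = f.update(0.0, 1.0)   # first frame after recovery: only a small step
```

Because `alpha` restarts at zero after each loss, the first fused frames move only slightly toward the outside-in pose instead of snapping to it, which is the smooth-transition behavior the abstract describes.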

Citations: 0
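The smooth-transition idea in the abstract above, ramping the outside-in correction back in after a tracking loss instead of snapping to it, can be sketched in one dimension. The gain schedule and all names are illustrative assumptions, not the paper's filter:

```python
class AdaptiveComplementaryFilter:
    """1-D sketch: fuse inside-out motion (relative, always available) with
    outside-in position (absolute, intermittent). After the outside-in
    signal reappears, the blend gain ramps up from zero so the viewpoint
    converges without a sudden jump."""

    def __init__(self, ramp_time=1.0, max_gain=0.1):
        self.ramp_time = ramp_time   # seconds until full correction gain
        self.max_gain = max_gain     # steady-state complementary gain
        self.t_since_reacq = None    # None while outside-in is lost
        self.estimate = 0.0

    def update(self, inside_out_delta, outside_in_pos, dt):
        # Inside-out tracking never drops out: always integrate its motion.
        self.estimate += inside_out_delta
        if outside_in_pos is None:   # outside-in occluded / in a blind spot
            self.t_since_reacq = None
            return self.estimate
        if self.t_since_reacq is None:
            self.t_since_reacq = 0.0  # signal just reacquired
        self.t_since_reacq += dt
        # Adaptive gain: ~0 right after reacquisition, max_gain after ramp.
        alpha = self.max_gain * min(1.0, self.t_since_reacq / self.ramp_time)
        self.estimate += alpha * (outside_in_pos - self.estimate)
        return self.estimate

f = AdaptiveComplementaryFilter(ramp_time=1.0)
first = f.update(0.0, 1.0, dt=0.01)      # tiny correction: no viewpoint jump
for _ in range(1999):
    settled = f.update(0.0, 1.0, dt=0.01)  # converges smoothly toward 1.0
```

With a fixed gain, the first `update` after reacquisition would jump by `max_gain` times the full offset; here it moves by only about 0.001 and then converges over the ramp.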
Rapid and Precise Topological Comparison with Merge Tree Neural Networks.
Pub Date : 2024-09-19 DOI: 10.1109/TVCG.2024.3456395
Yu Qin, Brittany Terese Fasy, Carola Wenk, Brian Summa

Merge trees are a valuable tool in the scientific visualization of scalar fields; however, current methods for merge tree comparison are computationally expensive, primarily due to the exhaustive matching between tree nodes. To address this challenge, we introduce the Merge Tree Neural Network (MTNN), a learned neural network model designed for merge tree comparison. The MTNN enables rapid and high-quality similarity computation. We first demonstrate how to train graph neural networks, which have emerged as effective encoders for graphs, to produce embeddings of merge trees in vector spaces for efficient similarity comparison. Next, we formulate the novel MTNN model, which further improves the similarity comparison by integrating the tree and node embeddings with a new topological attention mechanism. We demonstrate the effectiveness of our model on real-world data in different domains and examine its generalizability across various datasets. Our experimental analysis demonstrates our approach's superiority in accuracy and efficiency. In particular, we speed up the prior state of the art by more than 100× on the benchmark datasets while maintaining an error rate below 0.1%.

Citations: 0
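For readers unfamiliar with the underlying structure, the branches a merge tree records can be computed for a 1-D scalar field with a small union-find sweep. This is only the classic sublevel-set construction, not the paper's MTNN encoder; names are illustrative:

```python
def merge_tree_branches(values):
    """Sweep a 1-D scalar field from low to high values. Each local minimum
    starts a branch; when two components meet, the branch with the higher
    birth value dies (merges into the older one). Returns (birth, death)
    pairs, with death=None for the essential branch at the global minimum."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    parent = {}

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for i in order:
        roots = {find(j) for j in (i - 1, i + 1) if j in parent}
        parent[i] = i
        if not roots:
            continue                  # local minimum: a new branch is born
        oldest, *rest = sorted(roots, key=lambda r: values[r])
        parent[i] = oldest
        for r in rest:                # younger branches die at this merge
            pairs.append((values[r], values[i]))
            parent[r] = oldest
    pairs.append((values[find(order[0])], None))
    return pairs

branches = merge_tree_branches([3, 1, 4, 0, 2])  # [(1, 4), (0, None)]
```

Fast comparison then amounts to comparing embeddings of such structures rather than exhaustively matching their nodes, which is the bottleneck the MTNN is designed to replace.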
Gesture2Text: A Generalizable Decoder for Word-Gesture Keyboards in XR Through Trajectory Coarse Discretization and Pre-Training
Pub Date : 2024-09-16 DOI: 10.1109/TVCG.2024.3456198
Junxiao Shen;Khadija Khaldi;Enmin Zhou;Hemant Bhaskar Surale;Amy Karlson
Text entry with word-gesture keyboards (WGK) is emerging as a popular method and becoming a key interaction technique for Extended Reality (XR). However, the diversity of interaction modes, keyboard sizes, and visual feedback in these environments introduces divergent word-gesture trajectory data patterns, leading to complexity in decoding trajectories into text. Template-matching decoding methods, such as SHARK2 [32], are commonly used for these WGK systems because they are easy to implement and configure. However, these methods are susceptible to decoding inaccuracies for noisy trajectories. While conventional neural-network-based decoders (neural decoders) trained on word-gesture trajectory data have been proposed to improve accuracy, they have their own limitations: they require extensive data for training and deep-learning expertise for implementation. To address these challenges, we propose a novel solution that combines ease of implementation with high decoding accuracy: a generalizable neural decoder enabled by pre-training on large-scale, coarsely discretized word-gesture trajectories. This approach produces a ready-to-use WGK decoder that generalizes across mid-air and on-surface WGK systems in augmented reality (AR) and virtual reality (VR), as evidenced by a robust average Top-4 accuracy of 90.4% on four diverse datasets. It significantly outperforms SHARK2 with a 37.2% improvement and surpasses the conventional neural decoder by 7.4%.
Moreover, the Pre-trained Neural Decoder's size is only 4 MB after quantization, without sacrificing accuracy, and it can operate in real-time, executing in just 97 milliseconds on Quest 3.
IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 11, pp. 7118-7128.
Citations: 0
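The trajectory coarse discretization the abstract builds its pre-training on can be pictured as snapping trajectory samples to key tokens. A minimal sketch on a simplified QWERTY grid follows; the layout constants and tokenization are illustrative assumptions, not the paper's scheme:

```python
# Key centers on a unit grid; each lower row is offset by half a key width.
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEY_CENTERS = {
    ch: (col + 0.5 * row, float(row))
    for row, keys in enumerate(QWERTY_ROWS)
    for col, ch in enumerate(keys)
}

def discretize(trajectory):
    """Snap every (x, y) sample of a word-gesture trajectory to its nearest
    key center and collapse consecutive repeats. The resulting key-token
    sequence abstracts away keyboard size and sampling rate, which is what
    lets a single pre-trained decoder generalize across WGK systems."""
    tokens = []
    for x, y in trajectory:
        key = min(KEY_CENTERS, key=lambda ch: (KEY_CENTERS[ch][0] - x) ** 2
                                            + (KEY_CENTERS[ch][1] - y) ** 2)
        if not tokens or tokens[-1] != key:
            tokens.append(key)
    return "".join(tokens)

tokens = discretize([(3.0, 2.0), (3.1, 2.0), (0.5, 1.0), (4.0, 0.0)])  # "cat"
```

Because nearby samples collapse onto the same key, trajectories recorded at different sizes or sampling rates that trace the same word map to the same token sequence.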
Journal
IEEE transactions on visualization and computer graphics