
Latest Publications in IEEE Transactions on Visualization and Computer Graphics

High-Fidelity and High-Efficiency Talking Portrait Synthesis With Detail-Aware Neural Radiance Fields.
Pub Date: 2024-10-31 DOI: 10.1109/TVCG.2024.3488960
Muyu Wang, Sanyuan Zhao, Xingping Dong, Jianbing Shen

In this paper, we propose a novel rendering framework based on neural radiance fields (NeRF), named HH-NeRF, that can generate high-resolution audio-driven talking portrait videos with high fidelity and fast rendering. Specifically, our framework includes a detail-aware NeRF module and an efficient conditional super-resolution module. First, a detail-aware NeRF is proposed to efficiently generate a high-fidelity, low-resolution talking head using encoded volume density estimation and audio-eye-aware color calculation. This module captures natural eye blinks and high-frequency details while maintaining a rendering time similar to previous fast methods. Second, we present an efficient conditional super-resolution module for the dynamic scene that directly generates the high-resolution portrait from our low-resolution head. By incorporating prior information such as the depth map and audio features, the proposed module can adopt a lightweight network to efficiently generate realistic and distinct high-resolution videos. Extensive experiments demonstrate that our method generates more distinct and higher-fidelity talking portraits in high-resolution (900 × 900) videos than state-of-the-art methods. Our code is available at https://github.com/muyuWang/HHNeRF.
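
The abstract gives no code; as a minimal sketch of the standard NeRF volume-rendering quadrature that any NeRF-based head renderer builds on (the detail-aware module and the conditional super-resolution stage are not reproduced here), assuming per-sample densities, colors, and step sizes along a ray are already available:

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one camera ray.

    Standard NeRF quadrature: alpha_i = 1 - exp(-sigma_i * delta_i),
    T_i = prod_{j<i}(1 - alpha_j), pixel = sum_i T_i * alpha_i * c_i.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)            # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]  # transmittance T_i
    weights = trans * alphas                           # compositing weights
    return (weights[:, None] * colors).sum(axis=0)     # final RGB, shape (3,)

# Toy usage: 64 random samples along one ray.
rng = np.random.default_rng(0)
rgb = render_ray(sigmas=rng.uniform(0.0, 5.0, 64),
                 colors=rng.uniform(0.0, 1.0, (64, 3)),
                 deltas=np.full(64, 1.0 / 64))
print(rgb)
```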

Citations: 0
SceneExplorer: An Interactive System for Expanding, Scheduling, and Organizing Transformable Layouts.
Pub Date: 2024-10-30 DOI: 10.1109/TVCG.2024.3488744
Shao-Kui Zhang, Jia-Hong Liu, Junkai Huang, Zi-Wei Chi, Hou Tam, Yong-Liang Yang, Song-Hai Zhang

Nowadays, 3D scenes are not merely static arrangements of objects. With the development of transformable modules, furniture objects can be translated, rotated, and even reshaped to achieve scenes with different functions (e.g., from a bedroom to a living room). Research on transformable domestic spaces therefore studies how a layout can change its function by reshaping and rearranging transformable modules, resulting in various transformable layouts. In practice, a rearrangement is conducted dynamically by reshaping/translating/rotating furniture objects on proper schedules, which can cost designers more time than static scene design. Because objects' functions change, the space of potential transformable layouts can also be extensive, making desired layouts hard to find. We present a system for exploring transformable layouts. Given a single input scene consisting of transformable modules, our system first attempts to derive more layouts by reshaping and rearranging the modules. The derived scenes are organized into a graph-like hierarchy according to their functions, where edges represent functional evolutions (e.g., a living room can be reshaped into a bedroom) and nodes represent layouts that are dynamically transformable through translating/rotating/reshaping modules. The resulting hierarchy lets scene designers interactively explore possible scene variants and preview the animated rearrangement process. Experiments show that our system is efficient at generating transformable layouts, sensible in organizing functional hierarchies, and inspiring in providing ideas during interactions.
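
As a rough illustration of the graph-like hierarchy described above, the sketch below builds a toy layout graph in which nodes are layouts and edges are functional evolutions; the layout names and transformation actions are hypothetical, not taken from the paper:

```python
import networkx as nx

# Hypothetical layouts (nodes) connected by functional evolutions (edges);
# each edge records the module operations that realize the transformation.
hierarchy = nx.DiGraph()
hierarchy.add_edge("living_room", "bedroom", action="fold sofa into bed")
hierarchy.add_edge("living_room", "office", action="raise table, rotate shelf")
hierarchy.add_edge("bedroom", "office", action="stow bed, unfold desk")

# Interactive exploration: list functions reachable from the current layout
# together with the transformation path a designer could preview.
for target in sorted(nx.descendants(hierarchy, "living_room")):
    path = nx.shortest_path(hierarchy, "living_room", target)
    print(target, "via", " -> ".join(path))
```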

Citations: 0
Deciphering Explicit and Implicit Features for Reliable, Interpretable, and Actionable User Churn Prediction in Online Video Games.
Pub Date: 2024-10-29 DOI: 10.1109/TVCG.2024.3487974
Xiyuan Wang, Laixin Xie, He Wang, Xingxing Xing, Wei Wan, Ziming Wu, Xiaojuan Ma, Quan Li

The burgeoning online video game industry has sparked intense competition among providers to both expand their user base and retain existing players, particularly within social interaction genres. To anticipate player churn, providers increasingly rely on machine learning (ML) models that focus on social interaction dynamics. However, the prevalent opacity of most ML algorithms poses a significant hurdle to their acceptance among domain experts, who often view them as "black boxes". Despite the availability of eXplainable Artificial Intelligence (XAI) techniques capable of elucidating model decisions, their adoption in the gaming industry remains limited, primarily because non-technical domain experts, such as product managers and game designers, face substantial challenges in deciphering the "explicit" and "implicit" features embedded within computational models. This study proposes a reliable, interpretable, and actionable solution for predicting player churn by restructuring model inputs into explicit and implicit features. It explores how establishing a connection between explicit and implicit features can help experts understand the underlying implicit features. Moreover, it emphasizes the need for XAI techniques that not only offer implementable interventions but also pinpoint the features most crucial to those interventions. Two case studies, including expert feedback and a within-subject user study, demonstrate the efficacy of our approach.
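
As a hedged sketch of the general idea of combining explicit and implicit features and ranking them for intervention (the paper's actual models and XAI pipeline are not reproduced; all feature names and data here are synthetic stand-ins), using generic scikit-learn tooling:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 1000
# Hypothetical explicit features (directly interpretable) and implicit
# features (a stand-in for learned social-interaction embeddings).
sessions = rng.poisson(4, n).astype(float)
friends = rng.poisson(2, n).astype(float)
implicit = rng.normal(size=(n, 8))
X = np.column_stack([sessions, friends, implicit])
churned = (sessions + rng.normal(0.0, 1.0, n) < 2).astype(int)  # toy label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, churned)
names = ["sessions_per_week", "friends_online"] + [f"implicit_{i}" for i in range(8)]
ranked = sorted(zip(names, model.feature_importances_), key=lambda t: -t[1])
print(ranked[:3])  # the most influential features suggest intervention levers
```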

Citations: 0
GVVST: Image-Driven Style Extraction From Graph Visualizations for Visual Style Transfer.
Pub Date: 2024-10-24 DOI: 10.1109/TVCG.2024.3485701
Sicheng Song, Yipeng Zhang, Yanna Lin, Huamin Qu, Changbo Wang, Chenhui Li

Automatic style extraction and transfer from existing well-designed graph visualizations can significantly alleviate the designer's workload. There are many types of graph visualizations; in this paper, we focus on node-link diagrams. We present a novel approach that streamlines the design process of graph visualizations by automatically extracting visual styles from well-designed examples and applying them to other graphs. Our formative study identifies the key styles that designers consider when crafting visualizations, categorizing them into global and local styles. Leveraging deep learning techniques such as saliency detection models and multi-label classification models, we develop end-to-end pipelines for extracting both global and local styles. Global styles cover aspects such as color scheme and layout, while local styles concern the finer details of node and edge representations. Through a user study and an evaluation experiment, we demonstrate the efficacy and time-saving benefits of our method, highlighting its potential to enhance the graph visualization design process.
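
As a minimal illustration of one global-style component only, the sketch below approximates a visualization's color scheme by clustering pixel colors; the paper itself uses saliency-detection and multi-label classification models, so this stands in solely for the color-scheme idea, with a made-up toy image:

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_palette(image, k=5):
    """Approximate a visualization's color scheme by clustering pixel colors.

    image: (H, W, 3) array in [0, 1]; returns k RGB cluster centers.
    """
    pixels = image.reshape(-1, 3)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
    return km.cluster_centers_

# Toy image: two flat color regions plus a little noise.
img = np.zeros((32, 32, 3))
img[:, :16] = [0.9, 0.3, 0.2]
img[:, 16:] = [0.2, 0.4, 0.8]
img += np.random.default_rng(0).normal(0.0, 0.02, img.shape)
print(extract_palette(img, k=2))
```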

Citations: 0
Visual Boundary-Guided Pseudo-Labeling for Weakly Supervised 3D Point Cloud Segmentation in Indoor Environments.
Pub Date: 2024-10-22 DOI: 10.1109/TVCG.2024.3484654
Zhuo Su, Lang Zhou, Yudi Tan, Boliang Guan, Fan Zhou

Accurate segmentation of 3D point clouds in indoor scenes remains a challenging task, often hindered by the labor-intensive nature of data annotation. While weakly supervised learning approaches have shown promise in leveraging partial annotations, they frequently struggle with imbalanced performance between foreground and background elements due to the complex structures and proximity of objects in indoor environments. To address this issue, we propose a novel foreground-aware label enhancement method utilizing visual boundary priors. Our approach projects 3D point clouds onto 2D planes and applies 2D image segmentation to generate pseudo-labels for foreground objects. These labels are subsequently back-projected into 3D space and used to train an initial segmentation model. We further refine this process by incorporating prior knowledge from the projected images to filter the predicted labels, followed by model retraining. We introduce this technique as the Foreground Boundary Prior (FBP), a versatile, plug-and-play module designed to enhance various weakly supervised point cloud segmentation methods. We demonstrate the efficacy of our approach on the widely used 2D-3D-Semantic dataset, employing both random-sample and bounding-box-based weak labeling strategies. Our experimental results show significant improvements in segmentation performance across different architectural backbones, highlighting the method's effectiveness and portability.
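
As a simplified sketch of the projection-based pseudo-labeling idea, assuming an orthographic top-down projection and a precomputed 2D segmentation (the paper's pipeline also applies boundary priors, label filtering, and retraining, none of which appear here):

```python
import numpy as np

def pseudo_label_from_projection(points, seg_image, res=0.05):
    """Give each 3D point the label of the 2D pixel it projects onto.

    points: (N, 3) indoor point cloud in meters; seg_image: (H, W) integer
    labels produced by a 2D segmenter on the top-down projection. Uses a
    plain orthographic projection; a real pipeline would also handle
    occlusion, multiple views, and boundary filtering.
    """
    ij = np.floor(points[:, :2] / res).astype(int)
    ij[:, 0] = np.clip(ij[:, 0], 0, seg_image.shape[0] - 1)
    ij[:, 1] = np.clip(ij[:, 1], 0, seg_image.shape[1] - 1)
    return seg_image[ij[:, 0], ij[:, 1]]  # (N,) pseudo-labels

pts = np.random.default_rng(0).uniform(0.0, 5.0, (100, 3))
seg = np.zeros((100, 100), dtype=int)
seg[:, 50:] = 1  # toy 2D segmentation: two half-plane classes
print(pseudo_label_from_projection(pts, seg)[:10])
```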

Citations: 0
Two-Level Transfer Functions Using t-SNE for Data Segmentation in Direct Volume Rendering.
Pub Date: 2024-10-21 DOI: 10.1109/TVCG.2024.3484471
Sangbong Yoo, Seokyeon Kim, Yun Jang

Transfer function (TF) design is crucial for enhancing the visualization quality and understanding of volume data in volume rendering. Recent research has proposed various multidimensional TFs that utilize diverse attributes extracted from volume data to control the rendering of individual voxels. Although multidimensional TFs enhance the ability to segregate data, manipulating many attributes for rendering is cumbersome. In contrast, low-dimensional TFs are easier to manage, but separating volume data with them during rendering is problematic. This paper proposes a novel approach, a two-level transfer function, for rendering volume data by reducing TF dimensions. The technique extracts multidimensional TF attributes from volume data and applies t-distributed Stochastic Neighbor Embedding (t-SNE) to the TF attributes for dimensionality reduction. The two-level transfer function combines the classical 2D TF and the t-SNE TF in the conventional direct volume rendering pipeline. The approach is evaluated by comparing segments in the t-SNE TF and rendered images across various volume datasets. The results demonstrate that the proposed approach allows us to manipulate multidimensional attributes easily while maintaining high visualization quality in volume rendering.
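
A minimal sketch of the dimensionality-reduction step, assuming hypothetical per-voxel TF attributes and using scikit-learn's t-SNE; the actual two-level TF design and its integration into the rendering pipeline are not shown:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Hypothetical per-voxel TF attributes (intensity, gradient magnitude,
# curvature, ...): shape (n_voxels, n_attributes).
attrs = rng.normal(size=(2000, 6))

# Reduce the multidimensional attributes to a 2D embedding; a 2D TF
# (color/opacity widgets) can then be drawn over this t-SNE space.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(attrs)
print(embedding.shape)  # (2000, 2)
```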

Citations: 0
Multi-Frequency Nonlinear Methods for 3D Shape Measurement of Semi-Transparent Surfaces Using Projector-Camera Systems.
Pub Date: 2024-10-18 DOI: 10.1109/TVCG.2024.3477413
Frank Billy Djupkep Dizeu, Michel Picard, Marc-Antoine Drouin, Jonathan Boisvert

Measuring the 3D shape of semi-transparent surfaces with projector-camera 3D scanners is a difficult task because these surfaces weakly reflect light in a diffuse manner and transmit a large part of the incident light. The task is even harder in the presence of participating background surfaces. The two methods proposed in this paper use sinusoidal patterns, each with a frequency chosen within the range allowed by the projection optics of the projector-camera system. They differ in how the camera-projector correspondence map is established, as well as in the number of patterns and the processing time required. The first method applies the discrete Fourier transform to the intensity signal measured at a camera pixel to inventory the projector columns that directly and indirectly illuminate the scene point imaged by that pixel. The second method goes beyond the discrete Fourier transform and achieves the same goal by fitting a proposed analytical model to the measured intensity signal. Once the one-to-many correspondence (one camera pixel to many projector columns) is established, a surface continuity constraint is applied to extract the one-to-one correspondence map linked to the semi-transparent surface. This map is used to determine the 3D point cloud of the surface by triangulation. Experimental results demonstrate the accuracy and reliability achieved by the proposed methods.
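
As a toy illustration of the first method's core idea, the sketch below recovers the dominant frequencies in a per-pixel intensity signal with a discrete Fourier transform; the frequencies and amplitudes are made up, and mapping recovered frequencies back to projector columns depends on the projected pattern schedule, which is not modeled here:

```python
import numpy as np

# One camera pixel sees a mix of sinusoids: one per projector column that
# lights its scene point, directly or via a background surface.
n_frames, f_direct, f_indirect = 256, 12, 47   # hypothetical frequencies
t = np.arange(n_frames)
signal = (1.0 * np.cos(2 * np.pi * f_direct * t / n_frames)
          + 0.3 * np.cos(2 * np.pi * f_indirect * t / n_frames))

spectrum = np.abs(np.fft.rfft(signal))
peaks = np.argsort(spectrum[1:])[::-1][:2] + 1  # top bins, skipping DC
print(sorted(peaks.tolist()))  # -> [12, 47]: the contributing frequencies
```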

Citations: 0
Parametric Linear Blend Skinning Model for Multiple-Shape 3D Garments.
Pub Date: 2024-10-18 DOI: 10.1109/TVCG.2024.3478852
Xipeng Chen, Guangrun Wang, Xiaogang Xu, Philip Torr, Liang Lin

We present a novel data-driven Parametric Linear Blend Skinning (PLBS) model meticulously crafted for generalized 3D garment dressing and animation. Previous data-driven methods are impeded by challenges including overreliance on human body modeling and limited adaptability across different garment shapes. Our method resolves these challenges via two goals: 1) develop a model based on garment modeling rather than human body modeling; 2) separately construct low-dimensional sub-spaces for modeling in-plane deformation (such as variation in garment shape and size) and out-of-plane deformation (such as deformation due to varied body size and motion). Therefore, we formulate garment deformation as a PLBS model controlled by a canonical 3D garment mesh, vertex-based skinning weights, and associated local patch transformations. Unlike traditional LBS models specialized for individual objects, the PLBS model can uniformly express varied garments and bodies: in-plane deformation is encoded on the canonical 3D garment, and out-of-plane deformation is controlled by the local patch transformations. In addition, we propose novel 3D garment registration and skinning-weight decomposition strategies to obtain adequate data for building the PLBS model across different garment categories. Furthermore, we employ dynamic fine-tuning to complement high-frequency signals missing from LBS on unseen testing data. Experiments illustrate that our method can model dynamics for loose-fitting garments, outperforming previous data-driven methods that use different sub-space modeling strategies. We show that our method factorizes and generalizes across varied body sizes, garment shapes, garment sizes, and human motions under different garment categories.
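
For reference, a minimal sketch of the standard LBS formulation that PLBS extends, v' = sum_j w_j (R_j v + t_j); the parametric sub-spaces, canonical-mesh encoding, and local patch transformations of PLBS are not reproduced:

```python
import numpy as np

def linear_blend_skinning(verts, weights, transforms):
    """Standard LBS: v'_i = sum_j w_ij * (T_j @ v_i).

    verts: (N, 3) canonical vertices; weights: (N, J) skinning weights
    summing to 1 per vertex; transforms: (J, 4, 4) bone transformations.
    PLBS additionally parameterizes the canonical garment mesh, the
    weights, and local patch transforms; only the base LBS is shown.
    """
    homo = np.concatenate([verts, np.ones((len(verts), 1))], axis=1)  # (N, 4)
    posed = np.einsum("jab,nb->nja", transforms, homo)[..., :3]       # (N, J, 3)
    return (weights[..., None] * posed).sum(axis=1)                   # (N, 3)

# Toy usage: two bones, the second translated slightly upward.
T = np.stack([np.eye(4), np.eye(4)])
T[1, :3, 3] = [0.0, 0.1, 0.0]
v = np.random.default_rng(0).normal(size=(5, 3))
w = np.full((5, 2), 0.5)  # equal influence from both bones
print(linear_blend_skinning(v, w, T))
```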

Citations: 0
Cybersickness Abatement from Repeated Exposure to VR with Reduced Discomfort.
Pub Date: 2024-10-17 DOI: 10.1109/TVCG.2024.3483070
Taylor A Doty, Jonathan W Kelly, Stephen B Gilbert, Michael C Dorneich

Cybersickness, or sickness induced by virtual reality (VR), negatively impacts the enjoyment and adoption of the technology. One method that has been used to reduce sickness is repeated exposure to VR, herein termed Cybersickness Abatement from Repeated Exposure (CARE). However, high sickness levels during repeated exposure may discourage some users from returning. Field of view (FOV) restriction reduces cybersickness by minimizing visual motion in the periphery, but it also degrades the user's visual experience. This study explored whether CARE that occurs under FOV restriction generalizes to a full-FOV experience. Participants played a VR game for up to 20 minutes. Those in the Repeated Exposure Condition played the same VR game on four separate days, experiencing FOV restriction during the first three days and no FOV restriction on the fourth day. Results indicated significant CARE with FOV restriction (Days 1-3). Further, cybersickness on Day 4, without FOV restriction, was significantly lower than that of participants in the Single Exposure Condition, who experienced the game without FOV restriction on only one day. The current findings show that significant CARE can occur while experiencing minimal cybersickness. Results are considered in the context of multiple theoretical explanations for CARE, including sensory rearrangement, adaptation, habituation, and postural control.

Citations: 0
Evaluating Effectiveness of Interactivity in Contour-Based Geospatial Visualizations.
Pub Date: 2024-10-16 DOI: 10.1109/TVCG.2024.3481354
Abdullah-Al-Raihan Nayeem, Dongyun Han, William J Tolone, Isaac Cho

Contour maps are an essential tool for exploring spatial features of terrain, such as distance, direction, and surface gradient among contour areas. From the perspective of human cognition, user interactions in contour-based visualizations create noticeably different approaches to visual analysis. As such, various interactive approaches have been introduced to improve system usability and enhance human cognition for complex, large-scale spatial data exploration. However, what user interaction means for contour maps, including its purpose, when to leverage it, and its design primitives, has yet to be investigated in the context of analysis tasks. Therefore, further research is needed to better understand and quantify the potential benefits offered by user interactions in contour-based geospatial visualizations designed to support analytical tasks. In this paper, we present a contour-based interactive geospatial visualization designed for analytical tasks. We conducted a crowd-sourced user study (N=62) to examine the impact of interactive features on analysis using contour-based geospatial visualizations. Our results show that the interactive features aid participants' data analysis and understanding with respect to spatial data extent, map layout, task complexity, and user expertise. Finally, we discuss our findings in depth; they can serve as guidelines for the future design and implementation of interactive features supporting case-specific analytical tasks on contour-based geospatial views.

Citations: 0