
Latest publications in IEEE Transactions on Visualization and Computer Graphics

Multi-Frequency Nonlinear Methods for 3D Shape Measurement of Semi-Transparent Surfaces Using Projector-Camera Systems.
Pub Date : 2024-10-18 DOI: 10.1109/TVCG.2024.3477413
Frank Billy Djupkep Dizeu, Michel Picard, Marc-Antoine Drouin, Jonathan Boisvert

Measuring the 3D shape of semi-transparent surfaces with projector-camera 3D scanners is difficult because these surfaces reflect light only weakly and diffusely, and transmit a large part of the incident light. The task is even harder in the presence of participating background surfaces. The two methods proposed in this paper use sinusoidal patterns, each with a frequency chosen within the range allowed by the projection optics of the projector-camera system. They differ in how the camera-projector correspondence map is established, as well as in the number of patterns and the processing time required. The first method applies the discrete Fourier transform to the intensity signal measured at a camera pixel to inventory the projector columns that directly or indirectly illuminate the scene point imaged by that pixel. The second method goes beyond the discrete Fourier transform and achieves the same goal by fitting a proposed analytical model to the measured intensity signal. Once the one-to-many correspondence (one camera pixel to many projector columns) is established, a surface continuity constraint is applied to extract the one-to-one correspondence map associated with the semi-transparent surface. This map is used to determine the 3D point cloud of the surface by triangulation. Experimental results demonstrate the accuracy and reliability achieved by the proposed methods.
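To make the first method concrete, here is a minimal, hedged sketch (not the authors' code; the pattern model, resolution, and column positions are illustrative assumptions) of how a DFT over a swept-frequency sinusoidal projection can inventory the projector columns contributing to one camera pixel: each contributing column shows up as a spectral peak at a bin proportional to its column index.

```python
import numpy as np

# Toy setup (assumptions, not the paper's parameters): a projector of
# W columns sweeps the spatial frequency of a column-wise sinusoid over
# N steps. One camera pixel receives light from a few projector columns,
# directly and indirectly; columns are kept below W/2 so the sweep stays
# unambiguous in this toy version.
W, N = 1024, 256
cols = [120, 308]          # hypothetical contributing columns
amps = [1.0, 0.4]          # direct vs. indirect contribution strengths

k = np.arange(N)
# Intensity at the pixel for frequency step k: each contributing column x
# adds a sinusoid cos(2*pi*k*x/W), so column x maps to DFT bin x*N/W.
signal = sum(a * np.cos(2 * np.pi * k * x / W) for a, x in zip(amps, cols))

spectrum = np.abs(np.fft.rfft(signal))
peaks = np.argsort(spectrum[1:])[::-1][:2] + 1   # top-2 non-DC bins
recovered = np.round(peaks * W / N).astype(int)  # bin -> projector column
print(sorted(recovered.tolist()))                # [120, 308]
```

More frequency steps shrink the bin quantization; per the abstract, the second method instead fits an analytical model to this same per-pixel signal.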

Citations: 0
Parametric Linear Blend Skinning Model for Multiple-Shape 3D Garments.
Pub Date : 2024-10-18 DOI: 10.1109/TVCG.2024.3478852
Xipeng Chen, Guangrun Wang, Xiaogang Xu, Philip Torr, Liang Lin

We present a novel data-driven Parametric Linear Blend Skinning (PLBS) model meticulously crafted for generalized 3D garment dressing and animation. Previous data-driven methods are impeded by certain challenges, including overreliance on human body modeling and limited adaptability across different garment shapes. Our method resolves these challenges via two goals: 1) develop a model based on garment modeling rather than human body modeling; 2) separately construct low-dimensional sub-spaces for modeling in-plane deformation (such as variation in garment shape and size) and out-of-plane deformation (such as deformation due to varied body size and motion). Accordingly, we formulate garment deformation as a PLBS model controlled by a canonical 3D garment mesh, vertex-based skinning weights, and associated local patch transformations. Unlike traditional LBS models specialized for individual objects, the PLBS model can uniformly express varied garments and bodies: in-plane deformation is encoded on the canonical 3D garment, while out-of-plane deformation is controlled by the local patch transformations. In addition, we propose novel 3D garment registration and skinning weight decomposition strategies to obtain adequate data for building PLBS models across garment categories. Furthermore, we employ dynamic fine-tuning to complement high-frequency signals missing from LBS on unseen testing data. Experiments illustrate that our method is capable of modeling dynamics for loose-fitting garments, outperforming previous data-driven modeling methods that use different sub-space modeling strategies. We showcase that our method can factorize and generalize across varied body sizes, garment shapes, garment sizes, and human motions under different garment categories.
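For readers unfamiliar with the underlying skinning operation, here is a minimal numpy sketch of plain linear blend skinning, the building block that PLBS parameterizes; the shapes, weights, and transforms below are toy values, not the paper's model.

```python
import numpy as np

def lbs(vertices, weights, transforms):
    """Plain LBS: vertices (V,3), weights (V,J), transforms (J,4,4)."""
    V = vertices.shape[0]
    homo = np.hstack([vertices, np.ones((V, 1))])            # (V,4)
    # Blend each joint's 4x4 transform per vertex, then apply it.
    blended = np.einsum('vj,jab->vab', weights, transforms)  # (V,4,4)
    return np.einsum('vab,vb->va', blended, homo)[:, :3]

# Toy usage: two joints; the second joint translates up by one unit.
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0],
              [0.2, 0.8]])            # per-vertex skinning weights
T = np.stack([np.eye(4), np.eye(4)])
T[1, :3, 3] = [0.0, 1.0, 0.0]
print(lbs(verts, w, T))               # second vertex lifted by 0.8 in y
```

Per the abstract, PLBS replaces the per-object rest mesh and weights with garment-generic counterparts: the canonical mesh carries in-plane variation, and local patch transformations carry out-of-plane deformation.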

Citations: 0
Cybersickness Abatement from Repeated Exposure to VR with Reduced Discomfort.
Pub Date : 2024-10-17 DOI: 10.1109/TVCG.2024.3483070
Taylor A Doty, Jonathan W Kelly, Stephen B Gilbert, Michael C Dorneich

Cybersickness, or sickness induced by virtual reality (VR), negatively impacts the enjoyment and adoption of the technology. One method that has been used to reduce sickness is repeated exposure to VR, herein termed Cybersickness Abatement from Repeated Exposure (CARE). However, high sickness levels during repeated exposure may discourage some users from returning. Field of view (FOV) restriction reduces cybersickness by minimizing visual motion in the periphery, but it also degrades the user's visual experience. This study explored whether the CARE that occurs under FOV restriction generalizes to a full-FOV experience. Participants played a VR game for up to 20 minutes. Those in the Repeated Exposure Condition played the same VR game on four separate days, experiencing FOV restriction during the first three days and no FOV restriction on the fourth day. Results indicated significant CARE with FOV restriction (Days 1-3). Further, cybersickness on Day 4, without FOV restriction, was significantly lower than that of participants in the Single Exposure Condition, who experienced the game without FOV restriction on a single day only. The current findings show that significant CARE can occur while experiencing minimal cybersickness. Results are considered in the context of multiple theoretical explanations for CARE, including sensory rearrangement, adaptation, habituation, and postural control.
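As context for the FOV-restriction manipulation, here is an illustrative sketch (not from the study; the angular thresholds and resolution are assumptions) of how a software FOV restrictor is commonly implemented: a radial vignette that keeps the center of view clear and fades the periphery to black, suppressing peripheral optic flow.

```python
import numpy as np

def vignette_mask(h, w, fov_deg=100.0, inner_deg=25.0, outer_deg=35.0):
    """Return an (h, w) opacity map in [0, 1]; 1 = fully blacked out."""
    ys, xs = np.mgrid[0:h, 0:w]
    cx, cy = (w - 1) / 2, (h - 1) / 2
    # Approximate angular eccentricity of each pixel from the view center,
    # assuming a linear pixels-per-degree mapping across the display.
    px_per_deg = w / fov_deg
    ecc = np.hypot(xs - cx, ys - cy) / px_per_deg
    # Smoothstep between the inner (clear) and outer (opaque) radii.
    t = np.clip((ecc - inner_deg) / (outer_deg - inner_deg), 0.0, 1.0)
    return t * t * (3 - 2 * t)

mask = vignette_mask(1200, 1080)
print(mask[600, 540], mask[600, 0])  # ~0 at the center, 1 at the left edge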

Citations: 0
Evaluating Effectiveness of Interactivity in Contour-Based Geospatial Visualizations.
Pub Date : 2024-10-16 DOI: 10.1109/TVCG.2024.3481354
Abdullah-Al-Raihan Nayeem, Dongyun Han, William J Tolone, Isaac Cho

Contour maps are an essential tool for exploring spatial features of terrain, such as distance, direction, and surface gradient among contour areas. User interactions in contour-based visualizations create approaches to visual analysis that are noticeably different from the perspective of human cognition. As such, various interactive approaches have been introduced to improve system usability and enhance human cognition for complex and large-scale spatial data exploration. However, what user interaction means for contour maps, its purpose, when to leverage it, and its design primitives have yet to be investigated in the context of analysis tasks. Therefore, further research is needed to better understand and quantify the potential benefits offered by user interactions in contour-based geospatial visualizations designed to support analytical tasks. In this paper, we present a contour-based interactive geospatial visualization designed for analytical tasks. We conducted a crowd-sourced user study (N=62) to examine the impact of interactive features on analysis using contour-based geospatial visualizations. Our results show that the interactive features aid participants' data analysis and understanding with respect to spatial data extent, map layout, task complexity, and user expertise. Finally, we discuss our findings in depth; they can serve as guidelines for the future design and implementation of interactive features supporting case-specific analytical tasks on contour-based geospatial views.
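As a baseline illustration of the kind of view under study, the snippet below renders a labeled contour map from a synthetic elevation grid; the terrain function and styling are assumptions, not the study's stimuli.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic elevation grid: two Gaussian hills on a 300x300 domain.
x, y = np.meshgrid(np.linspace(-3, 3, 300), np.linspace(-3, 3, 300))
elev = np.exp(-(x**2 + y**2)) + 0.6 * np.exp(-((x - 1.5)**2 + (y + 1)**2))

fig, ax = plt.subplots()
cs = ax.contour(x, y, elev, levels=10, cmap='terrain')
ax.clabel(cs, inline=True, fontsize=8)   # label contour lines with values
ax.set_title('Synthetic terrain, 10 contour levels')
plt.show()
```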

Citations: 0
Data Playwright: Authoring Data Videos With Annotated Narration.
Pub Date : 2024-10-16 DOI: 10.1109/TVCG.2024.3477926
Leixian Shen, Haotian Li, Yun Wang, Tianqi Luo, Yuyu Luo, Huamin Qu

Creating data videos that effectively narrate stories with animated visuals requires substantial effort and expertise. A promising research trend is leveraging easy-to-use natural language (NL) interaction to automatically synthesize data video components from narrative content such as text narrations, or from NL commands that specify user-required designs. Nevertheless, previous research has overlooked the integration of narrative content with specific design-authoring commands, leading to generated results that lack customization or fail to fit seamlessly into the narrative context. To address these issues, we introduce a novel paradigm for creating data videos that seamlessly integrates users' authoring and narrative intents in a unified format called annotated narration, allowing users to incorporate NL commands for design authoring as inline annotations within the narration text. Informed by a formative study on users' preferences for annotated narration, we develop a prototype system named Data Playwright that embodies this paradigm for the effective creation of data videos. Within Data Playwright, users can write annotated narration based on uploaded visualizations. The system's interpreter, powered by large language models, automatically understands users' input and synthesizes data videos with narration-animation interplay. Finally, users can preview and fine-tune the video. A user study demonstrated that participants can effectively create data videos with Data Playwright by effortlessly articulating their desired outcomes through annotated narration.
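To illustrate the annotated-narration format, here is a small sketch of splitting narration text into (narration, command) pairs; the [[...]] delimiter is a hypothetical syntax chosen for illustration, not necessarily the one Data Playwright defines.

```python
import re

def split_annotated_narration(text):
    """Split text into (narration, command) pairs; command may be None."""
    parts, pos = [], 0
    for m in re.finditer(r'\[\[(.*?)\]\]', text):
        narration = text[pos:m.start()].strip()
        parts.append((narration, m.group(1).strip()))
        pos = m.end()
    tail = text[pos:].strip()
    if tail:
        parts.append((tail, None))   # trailing narration with no command
    return parts

story = ("Sales grew steadily through 2023 "
         "[[highlight the 2023 segment in orange]] "
         "before dipping in Q1 2024. [[zoom into Q1 2024]]")
for narration, cmd in split_annotated_narration(story):
    print(repr(narration), '->', cmd)
```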

Citations: 0
FR-CSG: Fast and Reliable Modeling for Constructive Solid Geometry.
Pub Date : 2024-10-15 DOI: 10.1109/TVCG.2024.3481278
Jiaxi Chen, Zeyu Shen, Mingyang Zhao, Xiaohong Jia, Dong-Ming Yan, Wencheng Wang

Reconstructing CSG trees from CAD models is a critical subject in reverse engineering. While there have been notable advancements in CSG reconstruction, challenges persist in capturing geometric details and achieving efficiency. Additionally, since non-axis-aligned volumetric primitives cannot maintain coplanar characteristics due to discretization errors, existing Boolean operations often produce zero-volume surfaces and suffer from topological errors during the CSG modeling process. To address these issues, we propose a novel workflow that achieves fast CSG reconstruction and reliable forward modeling. First, we employ feature removal and model subdivision techniques to decompose models into sub-components, which significantly expedites reconstruction by reducing model complexity. Then, we introduce a more principled method for primitive generation and filtering, and utilize a size-related optimization approach to reconstruct CSG trees. By re-adding features as additional nodes in the CSG trees, our method not only preserves intricate details but also ensures the conciseness, semantic integrity, and editability of the resulting CSG tree. Finally, we develop a coplanar primitive discretization method that represents primitives as large planes and extracts the original triangles after intersection. We extend the classification of triangles and incorporate a coplanar-aware Boolean tree assessment technique, allowing us to achieve manifold and watertight modeling results without zero-volume surfaces, even in extreme degenerate cases. We demonstrate the superiority of our method over state-of-the-art approaches. Moreover, the reconstructed CSG trees generated by our method contain extensive semantic information, enabling diverse model editing tasks.
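As background for what a reconstructed CSG tree encodes, the sketch below evaluates a toy CSG tree by point-membership classification over signed distance functions; the primitives and tree are illustrative, and this differs from the paper's pipeline, which operates on meshes with exact Boolean operations.

```python
import numpy as np

def sphere(center, r):
    return lambda p: np.linalg.norm(p - center) - r

def box(lo, hi):
    def sdf(p):  # signed distance to an axis-aligned box
        d = np.maximum(lo - p, p - hi)
        return np.linalg.norm(np.maximum(d, 0)) + min(d.max(), 0.0)
    return sdf

# CSG operators as min/max combinations of signed distances.
union        = lambda a, b: lambda p: min(a(p), b(p))
intersection = lambda a, b: lambda p: max(a(p), b(p))
difference   = lambda a, b: lambda p: max(a(p), -b(p))

# Toy tree: (box ∩ sphere) minus a small sphere drilled out of one corner.
shape = difference(
    intersection(box(np.array([-1.0, -1, -1]), np.array([1.0, 1, 1])),
                 sphere(np.array([0.0, 0, 0]), 1.3)),
    sphere(np.array([1.0, 1, 1]), 0.6),
)
print(shape(np.zeros(3)) < 0)            # True: the origin is inside
print(shape(np.array([1.0, 1, 1])) < 0)  # False: the drilled corner is empty
```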

Citations: 0
IEEE ISMAR 2024 Science & Technology Program Committee Members for Journal Papers
Pub Date : 2024-10-10 DOI: 10.1109/TVCG.2024.3453150
{"title":"IEEE ISMAR 2024 Science & Technology Program Committee Members for Journal Papers","authors":"","doi":"10.1109/TVCG.2024.3453150","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3453150","url":null,"abstract":"","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"ix-xi"},"PeriodicalIF":0.0,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10713481","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142430838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IEEE ISMAR 2024 Steering Committee Members
Pub Date : 2024-10-10 DOI: 10.1109/TVCG.2024.3453149
{"title":"IEEE ISMAR 2024 Steering Committee Members","authors":"","doi":"10.1109/TVCG.2024.3453149","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3453149","url":null,"abstract":"","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"viii-viii"},"PeriodicalIF":0.0,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10713480","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142430824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Message from the ISMAR 2024 Science and Technology Program Chairs and TVCG Guest Editors
Pub Date : 2024-10-10 DOI: 10.1109/TVCG.2024.3453128
Ulrich Eck;Maki Sugimoto;Misha Sra;Markus Tatzgern;Jeanine Stefanucci;Ian Williams
In this special issue of IEEE Transactions on Visualization and Computer Graphics (TVCG), we are pleased to present the journal papers from the 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2024), which will be held as a hybrid conference between October 21 and 25, 2024 in the Greater Seattle Area, USA. ISMAR continues the over twenty-year long tradition of IWAR, ISMR, and ISAR, and is the premier conference for Mixed and Augmented Reality in the world.
Citations: 0
IEEE ISMAR 2024 - Paper Reviewers for Journal Papers
Pub Date : 2024-10-10 DOI: 10.1109/TVCG.2024.3453151
{"title":"IEEE ISMAR 2024 - Paper Reviewers for Journal Papers","authors":"","doi":"10.1109/TVCG.2024.3453151","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3453151","url":null,"abstract":"","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"xii-xiii"},"PeriodicalIF":0.0,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10713477","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142430823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0