Multi-Frequency Nonlinear Methods for 3D Shape Measurement of Semi-Transparent Surfaces Using Projector-Camera Systems
Pub Date: 2024-10-18 | DOI: 10.1109/TVCG.2024.3477413
Frank Billy Djupkep Dizeu, Michel Picard, Marc-Antoine Drouin, Jonathan Boisvert
Measuring the 3D shape of semi-transparent surfaces with projector-camera 3D scanners is difficult because these surfaces reflect light only weakly and diffusely, and transmit a large part of the incident light. The task is even harder in the presence of participating background surfaces. The two methods proposed in this paper use sinusoidal patterns, each with a frequency chosen within the range allowed by the projection optics of the projector-camera system. They differ in how the camera-projector correspondence map is established, as well as in the number of patterns and the processing time required. The first method applies the discrete Fourier transform to the intensity signal measured at a camera pixel to inventory the projector columns that directly and indirectly illuminate the scene point imaged by that pixel. The second method goes beyond the discrete Fourier transform and achieves the same goal by fitting a proposed analytical model to the measured intensity signal. Once the one-to-many correspondence (one camera pixel to many projector columns) is established, a surface continuity constraint is applied to extract the one-to-one correspondence map associated with the semi-transparent surface. This map is used to determine the 3D point cloud of the surface by triangulation. Experimental results demonstrate the accuracy and reliability achieved by the proposed methods.
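To make the first method's core idea concrete, the following is a minimal sketch of how the DFT of a pixel's intensity signal can inventory the projector columns contributing to that pixel, assuming a simple encoding in which pattern k is a sinusoid of k cycles across the projector width. The pattern model, function names, and threshold are illustrative assumptions, not the authors' implementation.

# Hedged sketch: recovering candidate projector columns from one pixel's
# intensity signal via the DFT. Under the assumed encoding, a column c
# contributes a sinusoid of normalized frequency c / num_columns.
import numpy as np

def candidate_columns(signal, num_patterns, num_columns, rel_threshold=0.2):
    """Return projector columns whose spectral peaks exceed a relative threshold."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    peaks = np.flatnonzero(spectrum > rel_threshold * spectrum.max())
    # Map each spectral bin back to a projector column index.
    return np.round(num_columns * peaks / num_patterns).astype(int)

# Toy example: a pixel lit directly by column 168 and indirectly by column 320.
num_patterns, num_columns = 128, 1024
k = np.arange(num_patterns)
signal = (1.0 * np.cos(2 * np.pi * 168 * k / num_columns)
          + 0.4 * np.cos(2 * np.pi * 320 * k / num_columns) + 2.0)
print(candidate_columns(signal, num_patterns, num_columns))  # -> [168 320]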
{"title":"Multi-Frequency Nonlinear Methods for 3D Shape Measurement of Semi-Transparent Surfaces Using Projector-Camera Systems.","authors":"Frank Billy Djupkep Dizeu, Michel Picard, Marc-Antoine Drouin, Jonathan Boisvert","doi":"10.1109/TVCG.2024.3477413","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3477413","url":null,"abstract":"<p><p>Measuring the 3D shape of semi-transparent surfaces with projector-camera 3D scanners is a difficult task because these surfaces weakly reflect light in a diffuse manner, and transmit a large part of the incident light. The task is even harder in the presence of participating background surfaces. The two methods proposed in this paper use sinusoidal patterns, each with a frequency chosen in the frequency range allowed by the projection optics of the projector-camera system. They differ in the way in which the camera-projector correspondence map is established, as well as in the number of patterns and the processing time required. The first method utilizes the discrete Fourier transform, performed on the intensity signal measured at a camera pixel, to inventory projector columns illuminating directly and indirectly the scene point imaged by that pixel. The second method goes beyond discrete Fourier transform and achieves the same goal by fitting a proposed analytical model to the measured intensity signal. Once the one (camera pixel) to many (projector columns) correspondence is established, a surface continuity constraint is applied to extract the one to one correspondence map linked to the semi-transparent surface. This map is used to determine the 3D point cloud of the surface by triangulation. Experimental results demonstrate the performance (accuracy, reliability) achieved by the proposed methods.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142484147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parametric Linear Blend Skinning Model for Multiple-Shape 3D Garments
Pub Date: 2024-10-18 | DOI: 10.1109/TVCG.2024.3478852
Xipeng Chen, Guangrun Wang, Xiaogang Xu, Philip Torr, Liang Lin
We present a novel data-driven Parametric Linear Blend Skinning (PLBS) model crafted for generalized 3D garment dressing and animation. Previous data-driven methods are impeded by challenges such as overreliance on human body modeling and limited adaptability across different garment shapes. Our method resolves these challenges via two goals: 1) develop a model based on garment modeling rather than human body modeling; 2) separately construct low-dimensional sub-spaces for modeling in-plane deformation (such as variation in garment shape and size) and out-of-plane deformation (such as deformation due to varied body size and motion). We therefore formulate garment deformation as a PLBS model controlled by a canonical 3D garment mesh, vertex-based skinning weights, and associated local patch transformations. Unlike traditional LBS models specialized for individual objects, the PLBS model can uniformly express varied garments and bodies: the in-plane deformation is encoded on the canonical 3D garment, and the out-of-plane deformation is controlled by the local patch transformations. In addition, we propose novel 3D garment registration and skinning weight decomposition strategies to obtain adequate data for building the PLBS model across garment categories. Furthermore, we employ dynamic fine-tuning to complement the high-frequency signals missing from LBS for unseen testing data. Experiments show that our method can model dynamics for loose-fitting garments, outperforming previous data-driven methods that use different sub-space modeling strategies. We also show that our method factorizes and generalizes across varied body sizes, garment shapes, garment sizes, and human motions under different garment categories.
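For reference, the linear blend skinning formulation that PLBS builds on can be sketched in a few lines: each vertex is deformed by a weighted blend of local transforms. The array names and shapes below are illustrative assumptions, not the PLBS code.

# Minimal linear blend skinning sketch: each vertex of a canonical garment
# mesh is deformed by a weighted blend of local (patch/bone) transforms.
import numpy as np

def linear_blend_skinning(vertices, weights, transforms):
    """vertices: (V, 3) canonical positions
    weights:    (V, J) skinning weights, rows sum to 1
    transforms: (J, 4, 4) homogeneous transform per patch/bone
    returns     (V, 3) deformed positions
    """
    V = vertices.shape[0]
    homo = np.concatenate([vertices, np.ones((V, 1))], axis=1)   # (V, 4)
    # Blend the transforms per vertex: T_v = sum_j w_vj * T_j, then apply.
    blended = np.einsum('vj,jab->vab', weights, transforms)      # (V, 4, 4)
    deformed = np.einsum('vab,vb->va', blended, homo)            # (V, 4)
    return deformed[:, :3]

# Toy usage: two patches, three vertices; patch 1 is translated upward.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
T0 = np.eye(4)
T1 = np.eye(4); T1[:3, 3] = [0.0, 1.0, 0.0]
print(linear_blend_skinning(verts, w, np.stack([T0, T1])))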
{"title":"Parametric Linear Blend Skinning Model for Multiple-Shape 3D Garments.","authors":"Xipeng Chen, Guangrun Wang, Xiaogang Xu, Philip Torr, Liang Lin","doi":"10.1109/TVCG.2024.3478852","DOIUrl":"10.1109/TVCG.2024.3478852","url":null,"abstract":"<p><p>We present a novel data-driven Parametric Linear Blend Skinning (PLBS) model meticulously crafted for generalized 3D garment dressing and animation. Previous data-driven methods are impeded by certain challenges including overreliance on human body modeling and limited adaptability across different garment shapes. Our method resolves these challenges via two goals: 1) Develop a model based on garment modeling rather than human body modeling. 2) Separately construct low-dimensional sub-spaces for modeling in-plane deformation (such as variation in garment shape and size) and out-of-plane deformation (such as deformation due to varied body size and motion). Therefore, we formulate garment deformation as a PLBS model controlled by canonical 3D garment mesh, vertex-based skinning weights and associated local patch transformation. Unlike traditional LBS models specialized for individual objects, PLBS model is capable of uniformly expressing varied garments and bodies, the in-plane deformation is encoded on the canonical 3D garment and the out-of-plane deformation is controlled by the local patch transformation. Besides, we propose novel 3D garment registration and skinning weight decomposition strategies to obtain adequate data to build PLBS model under different garment categories. Furthermore, we employ dynamic fine-tuning to complement high-frequency signals missing from LBS for unseen testing data. Experiments illustrate that our method is capable of modeling dynamics for loose-fitting garments, outperforming previous data-driven modeling methods using different sub-space modeling strategies. We showcase that our method can factorize and be generalized for varied body sizes, garment shapes, garment sizes and human motions under different garment categories.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142484157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cybersickness Abatement from Repeated Exposure to VR with Reduced Discomfort
Pub Date: 2024-10-17 | DOI: 10.1109/TVCG.2024.3483070
Taylor A Doty, Jonathan W Kelly, Stephen B Gilbert, Michael C Dorneich
Cybersickness, or sickness induced by virtual reality (VR), negatively impacts the enjoyment and adoption of the technology. One method that has been used to reduce sickness is repeated exposure to VR, herein termed Cybersickness Abatement from Repeated Exposure (CARE). However, high sickness levels during repeated exposure may discourage some users from returning. Field of view (FOV) restriction reduces cybersickness by minimizing visual motion in the periphery, but also negatively affects the user's visual experience. This study explored whether CARE that occurs with FOV restriction generalizes to a full FOV experience. Participants played a VR game for up to 20 minutes. Those in the Repeated Exposure Condition played the same VR game on four separate days, experiencing FOV restriction during the first three days and no FOV restriction on the fourth day. Results indicated significant CARE with FOV restriction (Days 1-3). Further, cybersickness on Day 4, without FOV restriction, was significantly lower than that of participants in the Single Exposure Condition, who experienced the game without FOV restriction on only one day. The current findings show that significant CARE can occur while experiencing minimal cybersickness. Results are considered in the context of multiple theoretical explanations for CARE, including sensory rearrangement, adaptation, habituation, and postural control.
{"title":"Cybersickness Abatement from Repeated Exposure to VR with Reduced Discomfort.","authors":"Taylor A Doty, Jonathan W Kelly, Stephen B Gilbert, Michael C Dorneich","doi":"10.1109/TVCG.2024.3483070","DOIUrl":"10.1109/TVCG.2024.3483070","url":null,"abstract":"<p><p>Cybersickness, or sickness induced by virtual reality (VR), negatively impacts the enjoyment and adoption of the technology. One method that has been used to reduce sickness is repeated exposure to VR, herein Cybersickness Abatement from Repeated Exposure (CARE). However, high sickness levels during repeated exposure may discourage some users from returning. Field of view (FOV) restriction reduces cybersickness by minimizing visual motion in the periphery, but also negatively affects the user's visual experience. This study explored whether CARE that occurs with FOV restriction generalizes to a full FOV experience. Participants played a VR game for up to 20 minutes. Those in the Repeated Exposure Condition played the same VR game on four separate days, experiencing FOV restriction during the first three days and no FOV restriction on the fourth day. Results indicated significant CARE with FOV restriction (Days 1-3). Further, cybersickness on Day 4, without FOV restriction, was significantly lower than that of participants in the Single Exposure Condition, who experienced the game without FOV restriction only on one day. The current findings show that significant CARE can occur while experiencing minimal cybersickness. Results are considered in the context of multiple theoretical explanations for CARE, including sensory rearrangement, adaptation, habituation, and postural control.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142484143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating Effectiveness of Interactivity in Contour-Based Geospatial Visualizations
Pub Date: 2024-10-16 | DOI: 10.1109/TVCG.2024.3481354
Abdullah-Al-Raihan Nayeem, Dongyun Han, William J Tolone, Isaac Cho
Contour maps are an essential tool for exploring spatial features of terrain, such as distance, direction, and surface gradient among contour areas. User interactions in contour-based visualizations open up approaches to visual analysis that differ noticeably from a human-cognition perspective. As such, various interactive approaches have been introduced to improve system usability and support human cognition during complex and large-scale spatial data exploration. However, what user interaction means for contour maps, what purpose it serves, when to leverage it, and which design primitives to use have yet to be investigated in the context of analysis tasks. Further research is therefore needed to better understand and quantify the potential benefits offered by user interactions in contour-based geospatial visualizations designed to support analytical tasks. In this paper, we present a contour-based interactive geospatial visualization designed for analytical tasks. We conducted a crowd-sourced user study (N=62) to examine the impact of interactive features on analysis using contour-based geospatial visualizations. Our results show that the interactive features aid data analysis and understanding with respect to spatial data extent, map layout, task complexity, and user expertise. Finally, we discuss our findings in depth; they serve as guidelines for the future design and implementation of interactive features in support of case-specific analytical tasks on contour-based geospatial views.
{"title":"Evaluating Effectiveness of Interactivity in Contour-Based Geospatial Visualizations.","authors":"Abdullah-Al-Raihan Nayeem, Dongyun Han, William J Tolone, Isaac Cho","doi":"10.1109/TVCG.2024.3481354","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3481354","url":null,"abstract":"<p><p>Contour maps are an essential tool for exploring spatial features of the terrain, such as distance, directions, and surface gradient among the contour areas. User interactions in contour-based visualizations create approaches to visual analysis that are noticeably different from the perspective of human cognition. As such, various interactive approaches have been introduced to improve system usability and enhance human cognition for complex and large-scale spatial data exploration. However, what user interaction means for contour maps, its purpose, when to leverage, and design primitives have yet to be investigated in the context of analysis tasks. Therefore, further research is needed to better understand and quantify the potentials and benefits offered by user interactions in contour-based geospatial visualizations designed to support analytical tasks. In this paper, we present a contour-based interactive geospatial visualization designed for analytical tasks. We conducted a crowd-sourced user study (N=62) to examine the impact of interactive features on analysis using contour-based geospatial visualizations. Our results show that the interactive features aid in their data analysis and understanding in terms of spatial data extent, map layout, task complexity, and user expertise. Finally, we discuss our findings in-depth, which will serve as guidelines for future design and implementation of interactive features in support of case-specific analytical tasks on contour-based geospatial views.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142484145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data Playwright: Authoring Data Videos With Annotated Narration
Pub Date: 2024-10-16 | DOI: 10.1109/TVCG.2024.3477926
Leixian Shen, Haotian Li, Yun Wang, Tianqi Luo, Yuyu Luo, Huamin Qu
Creating data videos that effectively narrate stories with animated visuals requires substantial effort and expertise. A promising research trend is leveraging the easy-to-use natural language (NL) interaction to automatically synthesize data video components from narrative content like text narrations, or NL commands that specify user-required designs. Nevertheless, previous research has overlooked the integration of narrative content and specific design authoring commands, leading to generated results that lack customization or fail to seamlessly fit into the narrative context. To address these issues, we introduce a novel paradigm for creating data videos, which seamlessly integrates users' authoring and narrative intents in a unified format called annotated narration, allowing users to incorporate NL commands for design authoring as inline annotations within the narration text. Informed by a formative study on users' preference for annotated narration, we develop a prototype system named Data Playwright that embodies this paradigm for effective creation of data videos. Within Data Playwright, users can write annotated narration based on uploaded visualizations. The system's interpreter automatically understands users' inputs and synthesizes data videos with narration-animation interplay, powered by large language models. Finally, users can preview and fine-tune the video. A user study demonstrated that participants can effectively create data videos with Data Playwright by effortlessly articulating their desired outcomes through annotated narration.
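The annotated-narration paradigm (inline design commands embedded in the narration text) can be illustrated with a small parser; the [[...]] bracket syntax and the names below are hypothetical stand-ins, not Data Playwright's actual annotation grammar.

# Hedged sketch: splitting an "annotated narration" string into plain narration
# text and inline design-authoring commands with their anchor positions.
import re
from dataclasses import dataclass

@dataclass
class Annotation:
    position: int   # character offset in the cleaned narration
    command: str    # the natural-language design command

def parse_annotated_narration(text):
    narration_parts, annotations = [], []
    cursor = 0
    for match in re.finditer(r"\[\[(.+?)\]\]", text):
        narration_parts.append(text[cursor:match.start()])
        annotations.append(Annotation(position=len("".join(narration_parts)),
                                      command=match.group(1).strip()))
        cursor = match.end()
    narration_parts.append(text[cursor:])
    return "".join(narration_parts), annotations

narration, cmds = parse_annotated_narration(
    "Sales rose sharply in Q3[[highlight the Q3 bar in red]]"
    " before flattening out in Q4[[zoom into the last two bars]].")
print(narration)
for c in cmds:
    print(c.position, c.command)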
{"title":"Data Playwright: Authoring Data Videos With Annotated Narration.","authors":"Leixian Shen, Haotian Li, Yun Wang, Tianqi Luo, Yuyu Luo, Huamin Qu","doi":"10.1109/TVCG.2024.3477926","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3477926","url":null,"abstract":"<p><p>Creating data videos that effectively narrate stories with animated visuals requires substantial effort and expertise. A promising research trend is leveraging the easy-to-use natural language (NL) interaction to automatically synthesize data video components from narrative content like text narrations, or NL commands that specify user-required designs. Nevertheless, previous research has overlooked the integration of narrative content and specific design authoring commands, leading to generated results that lack customization or fail to seamlessly fit into the narrative context. To address these issues, we introduce a novel paradigm for creating data videos, which seamlessly integrates users' authoring and narrative intents in a unified format called annotated narration, allowing users to incorporate NL commands for design authoring as inline annotations within the narration text. Informed by a formative study on users' preference for annotated narration, we develop a prototype system named Data Playwright that embodies this paradigm for effective creation of data videos. Within Data Playwright, users can write annotated narration based on uploaded visualizations. The system's interpreter automatically understands users' inputs and synthesizes data videos with narration-animation interplay, powered by large language models. Finally, users can preview and fine-tune the video. A user study demonstrated that participants can effectively create data videos with Data Playwright by effortlessly articulating their desired outcomes through annotated narration.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142484144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FR-CSG: Fast and Reliable Modeling for Constructive Solid Geometry
Pub Date: 2024-10-15 | DOI: 10.1109/TVCG.2024.3481278
Jiaxi Chen, Zeyu Shen, Mingyang Zhao, Xiaohong Jia, Dong-Ming Yan, Wencheng Wang
Reconstructing CSG trees from CAD models is a critical subject in reverse engineering. While there have been notable advancements in CSG reconstruction, challenges persist in capturing geometric details and achieving efficiency. Additionally, since non-axis-aligned volumetric primitives cannot maintain coplanar characteristics due to discretization errors, existing Boolean operations often lead to zero-volume surfaces and suffer from topological errors during the CSG modeling process. To address these issues, we propose a novel workflow to achieve fast CSG reconstruction and reliable forward modeling. First, we employ feature removal and model subdivision techniques to decompose models into sub-components. This significantly expedites the reconstruction by simplifying the complexity of the models. Then, we introduce a more reasonable method for primitive generation and filtering, and utilize a size-related optimization approach to reconstruct CSG trees. By re-adding features as additional nodes in the CSG trees, our method not only preserves intricate details but also ensures the conciseness, semantic integrity, and editability of the resulting CSG tree. Finally, we develop a coplanar primitive discretization method that represents primitives as large planes and extracts the original triangles after intersection. We extend the classification of triangles and incorporate a coplanar-aware Boolean tree assessment technique, allowing us to achieve manifold and watertight modeling results without zero-volume surfaces, even in extreme degenerate cases. We demonstrate the superiority of our method over state-of-the-art approaches. Moreover, the reconstructed CSG trees generated by our method contain extensive semantic information, enabling diverse model editing tasks.
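As background on what a reconstructed CSG tree encodes, here is a minimal sketch in which leaves are primitives, internal nodes are Boolean operations, and point membership is evaluated recursively; the node types and primitives are generic illustrations, not the FR-CSG data structures or its coplanar-aware Boolean kernel.

# Minimal CSG-tree sketch: leaves are solid primitives, internal nodes are
# Boolean operations, and membership of a 3D point is evaluated recursively.
from dataclasses import dataclass
from typing import Callable, Tuple

Point = Tuple[float, float, float]

@dataclass
class Primitive:
    contains: Callable[[Point], bool]

@dataclass
class Node:
    op: str            # 'union' | 'intersection' | 'difference'
    left: object
    right: object

def inside(node, p: Point) -> bool:
    if isinstance(node, Primitive):
        return node.contains(p)
    l, r = inside(node.left, p), inside(node.right, p)
    if node.op == 'union':
        return l or r
    if node.op == 'intersection':
        return l and r
    if node.op == 'difference':
        return l and not r
    raise ValueError(f"unknown op {node.op}")

# Toy model: a unit cube with a sphere of radius 0.6 carved out of one corner.
cube = Primitive(lambda p: all(0.0 <= c <= 1.0 for c in p))
sphere = Primitive(lambda p: sum(c * c for c in p) <= 0.6 ** 2)
model = Node('difference', cube, sphere)
print(inside(model, (0.1, 0.1, 0.1)))  # False: inside the carved sphere
print(inside(model, (0.9, 0.9, 0.9)))  # True: remaining cube material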
{"title":"FR-CSG: Fast and Reliable Modeling for Constructive Solid Geometry.","authors":"Jiaxi Chen, Zeyu Shen, Mingyang Zhao, Xiaohong Jia, Dong-Ming Yan, Wencheng Wang","doi":"10.1109/TVCG.2024.3481278","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3481278","url":null,"abstract":"<p><p>Reconstructing CSG trees from CAD models is a critical subject in reverse engineering. While there have been notable advancements in CSG reconstruction, challenges persist in capturing geometric details and achieving efficiency. Additionally, since non-axis-aligned volumetric primitives cannot maintain coplanar characteristics due to discretization errors, existing Boolean operations often lead to zero-volume surfaces and suffer from topological errors during the CSG modeling process. To address these issues, we propose a novel workflow to achieve fast CSG reconstruction and reliable forward modeling. First, we employ feature removal and model subdivision techniques to decompose models into sub-components. This significantly expedites the reconstruction by simplifying the complexity of the models. Then, we introduce a more reasonable method for primitive generation and filtering, and utilize a size-related optimization approach to reconstruct CSG trees. By re-adding features as additional nodes in the CSG trees, our method not only preserves intricate details but also ensures the conciseness, semantic integrity, and editability of the resulting CSG tree. Finally, we develop a coplanar primitive discretization method that represents primitives as large planes and extracts the original triangles after intersection. We extend the classification of triangles and incorporate a coplanar-aware Boolean tree assessment technique, allowing us to achieve manifold and watertight modeling results without zero-volume surfaces, even in extreme degenerate cases. We demonstrate the superiority of our method over state-of-the-art approaches. Moreover, the reconstructed CSG trees generated by our method contain extensive semantic information, enabling diverse model editing tasks.</p>","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142484146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE ISMAR 2024 Science & Technology Program Committee Members for Journal Papers
Pub Date: 2024-10-10 | DOI: 10.1109/TVCG.2024.3453150
IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 11, pp. ix-xi
{"title":"IEEE ISMAR 2024 Science & Technology Program Committee Members for Journal Papers","authors":"","doi":"10.1109/TVCG.2024.3453150","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3453150","url":null,"abstract":"","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"ix-xi"},"PeriodicalIF":0.0,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10713481","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142430838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Message from the ISMAR 2024 Science and Technology Program Chairs and TVCG Guest Editors
Pub Date: 2024-10-10 | DOI: 10.1109/TVCG.2024.3453128
Ulrich Eck, Maki Sugimoto, Misha Sra, Markus Tatzgern, Jeanine Stefanucci, Ian Williams
In this special issue of IEEE Transactions on Visualization and Computer Graphics (TVCG), we are pleased to present the journal papers from the 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2024), which will be held as a hybrid conference between October 21 and 25, 2024 in the Greater Seattle Area, USA. ISMAR continues the over twenty-year long tradition of IWAR, ISMR, and ISAR, and is the premier conference for Mixed and Augmented Reality in the world.
{"title":"Message from the ISMAR 2024 Science and Technology Program Chairs and TVCG Guest Editors","authors":"Ulrich Eck;Maki Sugimoto;Misha Sra;Markus Tatzgern;Jeanine Stefanucci;Ian Williams","doi":"10.1109/TVCG.2024.3453128","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3453128","url":null,"abstract":"In this special issue of IEEE Transactions on Visualization and Computer Graphics (TVCG), we are pleased to present the journal papers from the 23rd IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2024), which will be held as a hybrid conference between October 21 and 25, 2024 in the Greater Seattle Area, USA. ISMAR continues the over twenty-year long tradition of IWAR, ISMR, and ISAR, and is the premier conference for Mixed and Augmented Reality in the world.","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"vii-vii"},"PeriodicalIF":0.0,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10713471","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142430870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IEEE ISMAR 2024 - Paper Reviewers for Journal Papers
Pub Date: 2024-10-10 | DOI: 10.1109/TVCG.2024.3453151
IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 11, pp. xii-xiii
{"title":"IEEE ISMAR 2024 - Paper Reviewers for Journal Papers","authors":"","doi":"10.1109/TVCG.2024.3453151","DOIUrl":"https://doi.org/10.1109/TVCG.2024.3453151","url":null,"abstract":"","PeriodicalId":94035,"journal":{"name":"IEEE transactions on visualization and computer graphics","volume":"30 11","pages":"xii-xiii"},"PeriodicalIF":0.0,"publicationDate":"2024-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10713477","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142430823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}