Pub Date: 2024-07-01. Epub Date: 2024-08-20. DOI: 10.1109/MCG.2024.3378171
Anran Qi, Takeo Igarashi
We address the problem of modifying a given well-designed 2-D sewing pattern to accommodate garment edits in 3-D space. Existing methods usually adjust the sewing pattern by applying uniform flattening to the 3-D garment. The problems are twofold: first, uniform flattening ignores local scaling of the 2-D sewing pattern, such as the shrunk ribs of cuffs; second, it does not respect the implicit design rules and conventions of the industry, such as the use of straight edges for simplicity and precision in sewing. To address these problems, we present a pattern adjustment method that accounts for the nonuniform local scaling of the 2-D sewing pattern by utilizing the intrinsic scale matrix. In addition, we preserve the original boundary shape with an as-original-as-possible geometric constraint when desirable. We build a prototype with a set of commonly used alteration operations and showcase the capability of our method via a number of alteration examples throughout the article.
"PerfectTailor: Scale-Preserving 2-D Pattern Adjustment Driven by 3-D Garment Editing." IEEE Computer Graphics and Applications, vol. PP, pp. 126-132.
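The nonuniform local scaling idea can be illustrated with a toy sketch. This is not the authors' implementation (which uses an intrinsic scale matrix over the whole pattern); it is a one-dimensional, per-edge simplification showing how scale factors can shrink some segments of a pattern boundary, such as a ribbed cuff, while leaving others untouched and preserving edge directions.

```python
# Illustrative sketch only: per-edge scale factors applied to a 2-D
# pattern boundary polyline. Each edge keeps its direction; its length
# is multiplied by the corresponding factor (e.g., 0.5 for a rib).
def rescale_boundary(points, scales):
    """points: list of (x, y) vertices; scales: one factor per edge."""
    assert len(scales) == len(points) - 1
    out = [points[0]]
    for (x0, y0), (x1, y1), s in zip(points, points[1:], scales):
        dx, dy = x1 - x0, y1 - y0        # original edge vector
        px, py = out[-1]                 # chain from the rescaled vertex
        out.append((px + s * dx, py + s * dy))
    return out

# A straight cuff edge with the middle segment (the rib) shrunk to 0.5x.
pattern = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
adjusted = rescale_boundary(pattern, [1.0, 0.5, 1.0])
```

A full pattern piece would need a least-squares solve so that all edges of a closed boundary remain consistent; the chained update above works only for an open polyline.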
Pub Date: 2024-07-01. DOI: 10.1109/MCG.2024.3403299
Marcello A Carrozzino, Eleonora Lanfranco, Giuseppe Rignanese, Gianfranco Adornato, Massimo Bergamasco, Mike Potel
Virtual reality (VR) is increasingly employed in archaeology to showcase reconstructions of ancient sites to the general public, yet its utilization for professional purposes by archaeologists remains less common. To address this gap, we introduce a VR application specifically designed to streamline the storage and access of critical data for archaeological studies. This application provides experts with an immersive visualization of excavation sites and related information during the postexcavation analysis phase. The application interface facilitates direct interaction with 3-D models generated through photogrammetry and modeling techniques, enabling detailed examination of collected data and enhancing research activities. We applied this system to the case study of excavations at the Temple of Juno in Agrigento, Italy. In addition, we present the findings of a pilot user study involving archaeologists, which evaluates the effectiveness of immersive technologies for professionals in documenting, preserving, and exploring archaeological sites, while also driving potential future developments.
"Enhancing Archaeological Research Through Immersive Virtual Reality." IEEE Computer Graphics and Applications, vol. 44, no. 4, pp. 69-78.
Pub Date: 2024-07-01. DOI: 10.1109/MCG.2024.3406139
Fotis Liarokapis, Vaclav Milata, Jose Luis Ponton, Nuria Pelechano, Haris Zacharatos, Beatriz Sousa Santos, Alejandra J Magana, Rafael Bidarra
Recent developments in extended reality (XR) are already demonstrating the benefits of this technology in the educational sector. Unfortunately, educators may not be familiar with XR technology and may find it difficult to adopt in their classrooms. This article presents the overall architecture and objectives of an EU-funded project dedicated to XR for education, called Extended Reality for Education (XR4ED). The goal of the project is to provide a platform where educators can build XR teaching experiences without programming or 3-D modeling expertise. The platform will provide users with a marketplace to obtain, for example, 3-D models, avatars, and scenarios; graphical user interfaces to author new teaching environments; and communication channels to allow for collaborative virtual reality (VR). This article describes the platform and focuses on a key aspect of collaborative and social XR: the use of avatars. We show initial results on a) a marketplace used for populating educational content into XR environments, b) an intelligent augmented reality assistant that communicates between nonplayer characters and learners, and c) self-avatars providing nonverbal communication in collaborative VR.
"XR4ED: An Extended Reality Platform for Education." IEEE Computer Graphics and Applications, vol. 44, no. 4, pp. 79-88.
Pub Date: 2024-07-01. DOI: 10.1109/MCG.2024.3396617
Nina Rajcic, Bruce D Campbell, Francesca Samsel
As a self-professed AI artist, Nina Rajcic presented an opportunity for us to explore how AI artists have been developing their practice during an AI boom brought on by transformer and generative AI tools. Although her journey has been one of pursuing text as a creative output, the nature of transformers and diffusion models suggested relevance to graphical outputs as well. The following interview did not disappoint in that pursuit.
"Nina Rajcic: Navigating Artificial Intelligence for a Meaningful Artistic Practice." IEEE Computer Graphics and Applications, vol. 44, no. 4, pp. 133-139.
Pub Date: 2024-07-01. Epub Date: 2024-08-20. DOI: 10.1109/MCG.2024.3426314
Alan Rychert, Maria Lujan Ganuza, Matias Nicolas Selzer
This work explores the integration of generative pretrained transformer (GPT), an AI language model developed by OpenAI, as an assistant in low-cost virtual escape games. The study focuses on the synergy between virtual reality (VR) and GPT, aiming to evaluate its performance in helping solve logical challenges within a specific context in the virtual environment while acting as a personalized assistant through voice interaction. The findings from user evaluations revealed both positive perceptions and limitations of GPT in addressing highly complex challenges, indicating its potential as a valuable tool for providing assistance and guidance in problem-solving situations. The study also identified areas for future improvement, including adjusting the difficulty of puzzles and enhancing GPT's contextual understanding. Overall, the research sheds light on the opportunities and challenges of integrating AI language models such as GPT in virtual gaming environments, offering insights for further advancements in this field.
"Integrating GPT as an Assistant for Low-Cost Virtual Reality Escape-Room Games." IEEE Computer Graphics and Applications, vol. PP, pp. 14-25.
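As a rough illustration of how such an assistant might be scoped to "a specific context in the virtual environment," the sketch below assembles a chat-style message list that restricts the model to the current room and puzzle. The room name, puzzle description, and helper function are assumptions for illustration, not the authors' code; a real system would pass the result to a chat-completion API and connect it to speech input and output.

```python
# Illustrative sketch: scoping a GPT-backed escape-room assistant to the
# current puzzle so it gives hints, not full solutions. All names here
# are hypothetical examples, not taken from the article's implementation.
def build_assistant_prompt(room, puzzle, transcript):
    """Return a chat-completion message list for the in-game assistant."""
    system = (
        f"You are a helpful in-game assistant inside '{room}'. "
        f"Give hints for the current puzzle only: {puzzle}. "
        "Never reveal the full solution outright."
    )
    messages = [{"role": "system", "content": system}]
    for speaker, text in transcript:          # prior voice-interaction turns
        messages.append({"role": speaker, "content": text})
    return messages

msgs = build_assistant_prompt(
    "The Alchemist's Study",
    "mix reagents in the order shown on the mural",
    [("user", "I'm stuck, what should I look at?")],
)
```

Keeping the puzzle description in the system message is one simple way to address the contextual-understanding limitations the study reports: the model only ever sees the puzzle the player is actually facing.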
Pub Date: 2024-07-01. DOI: 10.1109/MCG.2024.3383137
Francesca Samsel, W Alan Scott, Kenneth Moreland, Theresa-Marie Rhyne
ParaView is one of the most prominent software tools for scientific visualization, used by scientists around the world. Color is a primary conduit for visually mapping data to its representation and, thus, enabling investigation and interpretation of the data. Colormap selection has a significant impact on what the data reveal; colormap design and selection are therefore critical aspects of scientific data visualization. A common choice for a user is the program's default colormap, so careful consideration of this default is consequential. Although the current default colormap in ParaView, a succession of hues from cool blue to warm red, has served the community well, research shows that more nuanced colormap configurations increase discriminability while maintaining other critical metrics. These findings inspired us to revisit and update the default colors in ParaView. Here we present a new ParaView default colormap, the criteria and methods of its development, and example visualizations and analytic metrics.
"A New Default Colormap for ParaView." IEEE Computer Graphics and Applications, vol. 44, no. 4, pp. 150-160.
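For context, the "cool blue to warm red" default discussed above can be approximated with a simple diverging interpolation through a light midpoint. This sketch is illustrative only: the control-point colors are approximate, and production colormaps (including ParaView's) are designed with carefully chosen control points in perceptual color spaces, which is precisely the kind of nuance the new default adds.

```python
# Minimal sketch of a diverging cool-to-warm colormap: piecewise-linear
# RGB interpolation from blue through near-white to red. The endpoint
# colors are approximations, not ParaView's exact control points.
def cool_to_warm(t):
    """Map t in [0, 1] to an (r, g, b) tuple, blue -> white -> red."""
    cool = (0.23, 0.30, 0.75)   # approximate cool blue endpoint
    mid = (0.86, 0.86, 0.86)    # light neutral midpoint
    warm = (0.71, 0.02, 0.15)   # approximate warm red endpoint
    if t <= 0.5:
        a, b, u = cool, mid, t * 2.0
    else:
        a, b, u = mid, warm, (t - 0.5) * 2.0
    return tuple(round(x + u * (y - x), 4) for x, y in zip(a, b))
```

Linear RGB interpolation like this can produce uneven perceptual steps along the ramp, which is one reason colormap research favors interpolation in perceptual spaces such as CIELAB.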
Pub Date: 2024-07-01. Epub Date: 2024-08-20. DOI: 10.1109/MCG.2024.3419699
Gerardo Restrepo, Edmond C Prakash, Sarah E Dashti, Andres Castillo, Jhon Gomez, Luis Oviedo, Juan Floyd, Juan Aycardi, Joan Trejos, Jean Gonzalez, Martin V Sierra, Andres A Navarro-Newball
Learning spaces for children with different sensory needs can now be interactive, multisensory experiences, designed collaboratively by 1) specialists in special-needs learning, 2) extended realities (XR) technologists, and 3) sensorially diverse children, to provide motivation, challenge, and the development of key skills. While conventional XR applications rely on audio and visual channels that make it hard to meet the needs of visually and hearing-impaired, sensorially diverse children, our research goes a step further by integrating sensory technologies, including haptic, tactile, kinaesthetic, and olfactory feedback, which were well received by the children. Our research also demonstrates protocols for 1) development of a suite of XR applications; 2) methods for experiments and evaluation; and 3) tangible improvements in the XR learning experience. Our research considered and complies with the ethical and social implications of the work and has the necessary approvals for accessibility, user safety, and privacy.
"Extended Realities for Sensorially Diverse Children." IEEE Computer Graphics and Applications, vol. PP, pp. 26-39.
Pub Date: 2024-07-01. Epub Date: 2024-08-20. DOI: 10.1109/MCG.2024.3426943
Gabriel Giraldo, Jean-Marie Normand, Myriam Servieres
In this article, we investigated the representation of wind in urban spaces through computational fluid dynamics simulations in virtual environments (VE). We compared wind perception (force and direction) as well as the sense of presence and embodiment in VE using different display technologies: head-mounted displays (HMD) and large screens, with or without an avatar. The tactile display was found to be most effective for detecting wind characteristics and enhancing presence and embodiment in virtual scenes, regardless of display type. Wind force and overall presence showed no significant differences between projection methods, but the perception of wind direction varied, which can be attributed to the head tracking of the HMD. In addition, gender differences emerged: females had a 7.42% higher presence on large screens, while males had a 23.13% higher presence with HMD (avatar present). These results highlight nuances in wind perception, the influence of technology, and gender differences in VE.
"A Comparative Study Between a Large Screen and an HMD Using Wind Representations in Virtual Reality." IEEE Computer Graphics and Applications, vol. PP, pp. 53-68.
Pub Date: 2024-06-21. DOI: 10.1109/mcg.2024.3385048
Gary Singh
When it comes to data, humans should always remain in the loop. Hence the name Dataloop: an AI development platform that helps companies in various industries create better AI applications and accelerate their workflows while retaining any human elements they might need, all via one modular platform that integrates data management, models, annotations, and human insights.
"In the Loop." IEEE Computer Graphics and Applications.