
Computer Graphics Forum: Latest Publications

Hierarchical Spherical Cross-Parameterization for Deforming Characters
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-09-19 | DOI: 10.1111/cgf.15197
Lizhou Cao, Chao Peng

The demand for immersive technology and realistic virtual environments has created a need for automated solutions to generate characters with morphological variations. However, existing approaches either rely on manual labour or oversimplify the problem by limiting it to static meshes or deformation transfers without shape morphing. In this paper, we propose a new cross-parameterization approach that semi-automates the generation of morphologically diverse characters with synthesized articulations and animations. The main contribution of this work is that our approach parameterizes deforming characters into a novel hierarchical multi-sphere domain, while considering the attributes of mesh topology, deformation and animation. With such a multi-sphere domain, our approach minimizes parametric distortion rates, enhances the bijectivity of parameterization and aligns deforming feature correspondences. The alignment process we propose allows users to focus only on major joint pairs, which is much simpler and more intuitive than the existing alignment solutions that involve a manual process of identifying feature points on mesh surfaces. Compared to recent works, our approach achieves high-quality results in the applications of 3D morphing, texture transfer, character synthesis and deformation transfer.
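The sketch below is only a toy illustration of the core idea of cross-parameterization through a common spherical domain: both meshes are radially projected onto a single unit sphere about their centroids and matched by nearest neighbours there. The single-sphere projection, the nearest-neighbour matching and all names are assumptions for illustration; the paper's hierarchical multi-sphere domain, distortion minimization and joint-pair alignment are not reproduced.

```python
import numpy as np

def project_to_sphere(vertices):
    """Crude spherical parameterization: radially project the vertices of a
    (roughly star-shaped, genus-0) mesh onto the unit sphere about its centroid."""
    centered = vertices - vertices.mean(axis=0)
    norms = np.linalg.norm(centered, axis=1, keepdims=True)
    return centered / np.maximum(norms, 1e-12)

def cross_parameterize(verts_a, verts_b):
    """Build a vertex-level correspondence between two meshes by matching
    their spherical images with nearest neighbours on the sphere."""
    sphere_a = project_to_sphere(verts_a)
    sphere_b = project_to_sphere(verts_b)
    dots = sphere_a @ sphere_b.T          # cosine similarity on the unit sphere
    return dots.argmax(axis=1)            # for each vertex of A, closest vertex of B

# Toy usage: map a sphere-like point set onto a radially perturbed copy of itself.
rng = np.random.default_rng(0)
verts_a = project_to_sphere(rng.normal(size=(200, 3)))
verts_b = verts_a * (1.0 + 0.1 * rng.random((200, 1)))   # "deformed" target
correspondence = cross_parameterize(verts_a, verts_b)
print(correspondence[:10])
```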

Citations: 0
Deep SVBRDF Acquisition and Modelling: A Survey
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-09-16 | DOI: 10.1111/cgf.15199
Behnaz Kavoosighafi, Saghi Hajisharif, Ehsan Miandji, Gabriel Baravdish, Wen Cao, Jonas Unger

Hand in hand with the rapid development of machine learning, deep learning and generative AI algorithms and architectures, the graphics community has seen a remarkable evolution of novel techniques for material and appearance capture. Typically, these machine-learning-driven methods and technologies, in contrast to traditional techniques, rely on only a single or very few input images, while enabling the recovery of detailed, high-quality measurements of bi-directional reflectance distribution functions, as well as the corresponding spatially varying material properties, also known as Spatially Varying Bi-directional Reflectance Distribution Functions (SVBRDFs). Learning-based approaches for appearance capture will play a key role in the development of new technologies that will exhibit a significant impact on virtually all domains of graphics. Therefore, to facilitate future research, this State-of-the-Art Report (STAR) presents an in-depth overview of the state-of-the-art in machine-learning-driven material capture in general, and focuses on SVBRDF acquisition in particular, due to its importance in accurately modelling complex light interaction properties of real-world materials. The overview includes a categorization of current methods along with a summary of each technique, an evaluation of their functionalities, their complexity in terms of acquisition requirements, computational aspects and usability constraints. The STAR is concluded by looking forward and summarizing open challenges in research and development toward predictive and general appearance capture in this field. A complete list of the methods and papers reviewed in this survey is available at computergraphics.on.liu.se/star_svbrdf_dl/.
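To make the quantity being acquired concrete, the following minimal sketch evaluates a spatially varying Cook–Torrance/GGX BRDF from per-texel albedo and roughness maps under a single light. The choice of lobes, the flat normal and the map layout are illustrative assumptions and do not correspond to any particular method covered by the survey.

```python
import numpy as np

def ggx_brdf(albedo, roughness, n, l, v):
    """Single-point Cook-Torrance BRDF with a GGX distribution.
    albedo: (3,), roughness: scalar, n/l/v: unit vectors of shape (3,)."""
    h = l + v
    h = h / np.linalg.norm(h)
    nl, nv = max(n @ l, 1e-4), max(n @ v, 1e-4)
    nh, vh = max(n @ h, 0.0), max(v @ h, 1e-4)
    a2 = max(roughness, 1e-3) ** 4                              # alpha = roughness^2
    d = a2 / (np.pi * (nh * nh * (a2 - 1.0) + 1.0) ** 2)        # GGX normal distribution
    k = (roughness + 1.0) ** 2 / 8.0
    g = (nl / (nl * (1 - k) + k)) * (nv / (nv * (1 - k) + k))   # Schlick-GGX geometry term
    f = 0.04 + 0.96 * (1.0 - vh) ** 5                           # Schlick Fresnel (dielectric)
    specular = d * g * f / (4.0 * nl * nv)
    return albedo / np.pi + specular                            # diffuse + specular lobes

def shade_svbrdf(albedo_map, roughness_map, l, v):
    """Evaluate the spatially varying BRDF per texel (flat normal assumed)."""
    n = np.array([0.0, 0.0, 1.0])
    out = np.zeros_like(albedo_map)
    for y in range(albedo_map.shape[0]):
        for x in range(albedo_map.shape[1]):
            out[y, x] = ggx_brdf(albedo_map[y, x], roughness_map[y, x], n, l, v) * max(n @ l, 0.0)
    return out

# Toy usage: a 4x4 patch with constant albedo and a left-to-right roughness ramp.
albedo = np.full((4, 4, 3), 0.5)
rough = np.tile(np.linspace(0.1, 0.9, 4), (4, 1))
l = v = np.array([0.0, 0.0, 1.0])
print(shade_svbrdf(albedo, rough, l, v).shape)   # (4, 4, 3)
```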

Citations: 0
EBPVis: Visual Analytics of Economic Behavior Patterns in a Virtual Experimental Environment
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-09-13 | DOI: 10.1111/cgf.15200
Yuhua Liu, Yuming Ma, Qing Shi, Jin Wen, Wanjun Zheng, Xuanwu Yue, Hang Ye, Wei Chen, Yuwei Meng, Zhiguang Zhou

Experimental economics is an important branch of economics that studies human behaviours in a controlled laboratory setting or out in the field. Scientific experiments are conducted in experimental economics to record the decisions people make in specific circumstances and to verify economic theories. As a significant pair of variables in the virtual experimental environment, decisions and outcomes change with the subjective factors of participants and the objective circumstances, making it a difficult task to capture human behaviour patterns and establish the correlations needed to verify economic theories. In this paper, we present a visual analytics system, EBPVis, which enables economists to visually explore human behaviour patterns and faithfully verify economic theories, e.g. the vicious cycle of poverty and the poverty trap. We utilize a Doc2Vec model to transform the economic behaviours of participants into a vectorized space according to their sequential decisions, where frequent sequences can be easily perceived and extracted to represent human behaviour patterns. To explore the correlation between decisions and outcomes, an Outcome View is designed to display the outcome variables for behaviour patterns. We also provide a Comparison View to support an efficient comparison between multiple behaviour patterns by revealing their differences in terms of decision combinations and time-varying profits. Moreover, an Individual View is designed to illustrate the outcome accumulation and behaviour patterns of subjects. Case studies, expert feedback and user studies based on a real-world dataset have demonstrated the effectiveness and practicability of EBPVis in the representation of economic behaviour patterns and the verification of economic theories.
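A minimal sketch of the embedding step as described: decision sequences are treated as documents and embedded with gensim's Doc2Vec, and frequent sub-sequences are counted as a crude stand-in for the pattern extraction. The toy decision vocabulary and the hyper-parameters are assumptions, not those used by EBPVis.

```python
from collections import Counter
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Each participant's behaviour is a sequence of discrete decisions (tokens).
sequences = {
    "p01": ["save", "save", "invest", "consume"],
    "p02": ["consume", "borrow", "consume", "borrow"],
    "p03": ["save", "invest", "invest", "consume"],
}

# Embed the decision sequences into a vector space with Doc2Vec.
docs = [TaggedDocument(words=seq, tags=[pid]) for pid, seq in sequences.items()]
model = Doc2Vec(documents=docs, vector_size=16, window=2, min_count=1, epochs=100, seed=1)
print(model.dv["p01"])                       # behaviour embedding of participant p01

# Count frequent length-2 sub-sequences as a crude stand-in for pattern mining.
bigrams = Counter(
    tuple(seq[i:i + 2]) for seq in sequences.values() for i in range(len(seq) - 1)
)
print(bigrams.most_common(3))
```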

Citations: 0
Mix-Max: A Content-Aware Operator for Real-Time Texture Transitions
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-09-05 | DOI: 10.1111/cgf.15193
Romain Fournier, Basile Sauvage

Mixing textures is a basic and ubiquitous operation in data-driven algorithms for real-time texture generation and rendering. It is usually performed either by linear blending, or by cutting. We propose a new mixing operator which encompasses and extends both, creating more complex transitions that adapt to the texture's contents. Our mixing operator takes as input two or more textures along with two or more priority maps, which encode how the texture patterns should interact. The resulting mixed texture is defined pixel-wise by selecting the maximum of both priorities. We show that it integrates smoothly into two widespread applications: transition between two different textures, and texture synthesis that mixes pieces of the same texture. We provide constant-time and parallel evaluation of the resulting mix over square footprints of MIP-maps, making our operator suitable for real-time rendering. We also develop a micro-priority model, inspired by micro-geometry models in rendering, which represents sub-pixel priorities by a statistical distribution, and which allows for tuning between sharp cuts and smooth blend.
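The pixel-wise selection rule itself is compact; the NumPy sketch below (assumed array shapes, synthetic priority maps) picks, for every pixel, the texture whose priority is maximal. The paper's MIP-map filtering over square footprints and its micro-priority model are not reproduced here.

```python
import numpy as np

def mix_max(textures, priorities):
    """Mix-max operator: per pixel, output the texture whose priority is largest.
    textures:   (N, H, W, C) array of input textures
    priorities: (N, H, W)    array of priority maps
    """
    winner = priorities.argmax(axis=0)                        # (H, W) index of the max priority
    return np.take_along_axis(textures, winner[None, :, :, None], axis=0)[0]

# Toy usage: transition between two 64x64 RGB textures driven by noisy priority ramps.
rng = np.random.default_rng(0)
tex = rng.random((2, 64, 64, 3))
ramp = np.linspace(0.0, 1.0, 64)[None, :].repeat(64, axis=0)  # left-to-right transition
prio = np.stack([ramp, 1.0 - ramp]) + 0.1 * rng.random((2, 64, 64))
mixed = mix_max(tex, prio)
print(mixed.shape)   # (64, 64, 3)
```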

Citations: 0
Optimizing Surface Voxelization for Triangular Meshes with Equidistant Scanlines and Gap Detection
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-09-04 | DOI: 10.1111/cgf.15195
S. Delgado Díez, C. Cerrada Somolinos, S. R. Gómez Palomo

This paper presents an efficient algorithm for voxelizing the surface of triangular meshes in a single compute pass. The algorithm uses parallel equidistant lines to traverse the interior of triangles, minimizing costly memory operations and avoiding visiting the same voxels multiple times. By detecting and visiting only the voxels in each line operation, the proposed method achieves better performance results. This method incorporates a gap detection step, targeting areas where scanline-based voxelization methods might fail. By selectively addressing these gaps, our method attains superior performance outcomes. Additionally, the algorithm is written entirely in a single compute GLSL shader, which makes it highly portable and vendor-independent. Its simplicity also makes it easy to adapt and extend for various applications. The paper compares the results of this algorithm with other modern methods, comprehensively comparing the time performance and resources used. Additionally, we introduce a novel metric, the ‘Slope Consistency Value’, which quantifies triangle orientation's impact on voxelization accuracy for scanline-based approaches. The results show that the proposed solution outperforms existing modern methods, especially in densely populated scenes with homogeneous triangle sizes and at higher resolutions.
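A heavily simplified CPU sketch of the scanline idea, not the paper's GLSL compute shader: each triangle is swept with parallel lines, sample points along each line are snapped to voxel coordinates, and the occupied voxels are collected. The conservative step size is an assumption that only limits, rather than eliminates, the gaps that the paper's dedicated gap-detection step is designed to handle.

```python
import numpy as np

def voxelize_triangle(v0, v1, v2, voxel_size):
    """Approximate surface voxelization of one triangle with parallel scanlines.
    Sweeps lines between edges (v0,v1) and (v0,v2) and marks the voxels that the
    sampled points fall into. Small gaps can remain; a production implementation
    needs an explicit gap-detection pass, as in the paper."""
    voxels = set()
    # Scanline and sample counts chosen conservatively from the edge lengths.
    n_lines = int(np.ceil(max(np.linalg.norm(v1 - v0),
                              np.linalg.norm(v2 - v0)) / (0.5 * voxel_size))) + 1
    for s in np.linspace(0.0, 1.0, n_lines):
        a = (1.0 - s) * v0 + s * v1            # point on edge v0-v1
        b = (1.0 - s) * v0 + s * v2            # point on edge v0-v2; a-b is parallel to v1-v2
        n_samples = int(np.ceil(np.linalg.norm(b - a) / (0.5 * voxel_size))) + 1
        for t in np.linspace(0.0, 1.0, n_samples):
            p = (1.0 - t) * a + t * b
            voxels.add(tuple(np.floor(p / voxel_size).astype(int)))
    return voxels

# Toy usage: one triangle voxelized into a grid with voxel size 0.25.
tri = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.2]), np.array([0.0, 1.0, 0.4])]
print(len(voxelize_triangle(*tri, voxel_size=0.25)))
```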

Citations: 0
ETBHD-HMF: A Hierarchical Multimodal Fusion Architecture for Enhanced Text-Based Hair Design
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-09-03 | DOI: 10.1111/cgf.15194
Rong He, Ge Jiao, Chen Li

Text-based hair design (TBHD) is an approach that uses text instructions to specify hairstyle and colour, valued for its flexibility and scalability. However, enhancing TBHD algorithms to improve generation quality and editing accuracy remains an open research challenge. One important reason is that existing models fall short in their alignment and fusion designs. Therefore, we propose a new layered multimodal fusion network called ETBHD-HMF, which decouples the input image and hair text information into layered hair colour and hairstyle representations. Within this network, a channel enhancement separation (CES) module is proposed to enhance important signals and suppress noise in the text representation obtained from CLIP, thus improving generation quality. Based on this, we develop weighted mapping fusion (WMF) sub-networks for hair colour and hairstyle. Each sub-network applies mapper operations to the input image and text representations to acquire joint information. The WMF then selectively merges the image representation and the joint information from various style layers using weighted operations, ultimately achieving fine-grained hairstyle designs. Additionally, to enhance editing accuracy and quality, we design a modality alignment loss to refine and optimize the information transmission and integration of the network. Experimental results on the CelebA-HQ dataset demonstrate that our proposed model exhibits superior overall performance in terms of generation quality, visual realism, and editing accuracy: ETBHD-HMF (27.8 PSNR, 0.864 IDS) outperforms HairCLIP (26.9 PSNR, 0.828 IDS), with a 3% higher PSNR and a 4% higher IDS.
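A hypothetical PyTorch sketch of the general shape of a weighted mapping fusion step: a mapper combines image and text embeddings into joint information, which is injected into per-layer style features with learned, normalised weights. The dimensions, layer count and module structure are assumptions and do not reproduce the ETBHD-HMF architecture.

```python
import torch
import torch.nn as nn

class WeightedMappingFusion(nn.Module):
    """Illustrative fusion block: a mapper turns (image, text) embeddings into
    joint information, which is merged with per-layer style features using
    learned, softmax-normalised weights."""
    def __init__(self, dim=512, n_layers=6):
        super().__init__()
        self.mapper = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.layer_weights = nn.Parameter(torch.zeros(n_layers))   # one weight per style layer

    def forward(self, image_feats, text_feat):
        # image_feats: (B, n_layers, dim) per-layer style codes, text_feat: (B, dim)
        b, n, d = image_feats.shape
        joint = self.mapper(torch.cat([image_feats.mean(dim=1), text_feat], dim=-1))  # (B, dim)
        w = torch.softmax(self.layer_weights, dim=0).view(1, n, 1)
        return image_feats + w * joint.unsqueeze(1)    # selectively inject joint info per layer

# Toy usage with random CLIP-sized embeddings.
wmf = WeightedMappingFusion()
out = wmf(torch.randn(2, 6, 512), torch.randn(2, 512))
print(out.shape)   # torch.Size([2, 6, 512])
```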

Citations: 0
Directional Texture Editing for 3D Models
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-09-02 | DOI: 10.1111/cgf.15196
Shengqi Liu, Zhuo Chen, Jingnan Gao, Yichao Yan, Wenhan Zhu, Jiangjing Lyu, Xiaokang Yang

Texture editing is a crucial task in 3D modelling that allows users to automatically manipulate the surface materials of 3D models. However, the inherent complexity of 3D models and the ambiguity of text descriptions make this task challenging. To tackle this challenge, we propose ITEM3D, a Texture Editing Model designed for automatic 3D object editing according to the text Instructions. Leveraging diffusion models and differentiable rendering, ITEM3D takes the rendered images as the bridge between text and 3D representation and further optimizes the disentangled texture and environment map. Previous methods adopted the absolute editing direction, namely score distillation sampling (SDS), as the optimization objective, which unfortunately results in noisy appearances and text inconsistencies. To solve the problem caused by the ambiguous text, we introduce a relative editing direction, an optimization objective defined by the noise difference between the source and target texts, to resolve the semantic ambiguity between the texts and images. Additionally, we gradually adjust the direction during optimization to further address the unexpected deviation in the texture domain. Qualitative and quantitative experiments show that our ITEM3D outperforms the state-of-the-art methods on various 3D objects. We also perform text-guided relighting to show explicit control over lighting. Our project page: https://shengqiliu1.github.io/ITEM3D/.
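Schematically, the relative editing direction replaces the absolute SDS target with the difference between two noise predictions on the same noised rendering, one conditioned on the target text and one on the source text. The sketch below is a hypothetical PyTorch formulation; `denoiser` stands in for a text-conditioned diffusion noise predictor and is not a real API, and the schedule and weighting are placeholder assumptions.

```python
import torch

def relative_direction_loss(denoiser, rendered, source_emb, target_emb, alphas_cumprod, w=1.0):
    """Schematic relative editing direction (cf. absolute SDS):
    the guidance signal is the *difference* of noise predictions under the
    target and source text embeddings, applied to the same noised rendering.
    `denoiser(x_t, t, cond)` is a hypothetical conditional noise predictor."""
    t = torch.randint(50, 950, (1,), device=rendered.device)
    noise = torch.randn_like(rendered)
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = a_t.sqrt() * rendered + (1.0 - a_t).sqrt() * noise    # forward diffusion of the rendering

    with torch.no_grad():
        eps_target = denoiser(x_t, t, target_emb)
        eps_source = denoiser(x_t, t, source_emb)

    # Relative direction; absolute SDS would instead use (eps_target - noise).
    grad = w * (eps_target - eps_source)
    # Standard SDS-style trick: a surrogate loss whose gradient w.r.t. the rendering is `grad`,
    # so a differentiable renderer can push it back into texture / environment-map parameters.
    return (grad.detach() * rendered).sum()

# Toy usage with a dummy denoiser standing in for a text-conditioned diffusion U-Net.
dummy = lambda x_t, t, cond: torch.zeros_like(x_t)
alphas = torch.linspace(0.9999, 0.0001, 1000).cumprod(dim=0)
img = torch.rand(1, 3, 64, 64, requires_grad=True)
loss = relative_direction_loss(dummy, img, torch.zeros(77, 768), torch.zeros(77, 768), alphas)
loss.backward()
```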

Citations: 0
Row–Column Separated Attention Based Low-Light Image/Video Enhancement
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-29 | DOI: 10.1111/cgf.15192
Chengqi Dong, Zhiyuan Cao, Tuoshi Qi, Kexin Wu, Yixing Gao, Fan Tang

The U-Net structure is widely used for low-light image/video enhancement. Without proper guidance from global information, however, the enhanced images contain regions with strong local noise and lose fine details. Attention mechanisms can better focus on and exploit global information, but applying attention over full images can significantly increase the number of parameters and computations. We propose a Row–Column Separated Attention module (RCSA) inserted after an improved U-Net. The RCSA module's input is the mean and maximum of each row and column of the feature map, which utilizes global information to guide local information with fewer parameters. We also propose two temporal loss functions that apply the method to low-light video enhancement and maintain temporal consistency. Extensive experiments on the LOL and MIT Adobe FiveK image datasets and the SDSD video dataset demonstrate the effectiveness of our approach.
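A hypothetical PyTorch sketch of a row–column separated attention block as described: the feature map is pooled into per-row and per-column mean/max statistics, row and column attention maps are predicted from them, and the features are rescaled. The 1D convolutions and their sizes are assumptions, not the paper's exact module.

```python
import torch
import torch.nn as nn

class RowColumnSeparatedAttention(nn.Module):
    """Illustrative RCSA-style block: attention is computed separately from
    row statistics (mean/max over width) and column statistics (mean/max over
    height), then applied multiplicatively to the feature map."""
    def __init__(self, channels):
        super().__init__()
        self.row_fc = nn.Conv1d(2 * channels, channels, kernel_size=1)
        self.col_fc = nn.Conv1d(2 * channels, channels, kernel_size=1)

    def forward(self, x):                       # x: (B, C, H, W)
        row_stats = torch.cat([x.mean(dim=3), x.amax(dim=3)], dim=1)   # (B, 2C, H)
        col_stats = torch.cat([x.mean(dim=2), x.amax(dim=2)], dim=1)   # (B, 2C, W)
        row_att = torch.sigmoid(self.row_fc(row_stats)).unsqueeze(3)   # (B, C, H, 1)
        col_att = torch.sigmoid(self.col_fc(col_stats)).unsqueeze(2)   # (B, C, 1, W)
        return x * row_att * col_att            # global row/column context guides local features

# Toy usage on a low-light feature map.
rcsa = RowColumnSeparatedAttention(channels=32)
print(rcsa(torch.randn(1, 32, 64, 48)).shape)   # torch.Size([1, 32, 64, 48])
```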

Citations: 0
Entropy-driven Progressive Compression of 3D Point Clouds
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-22 | DOI: 10.1111/cgf.15130
A. Zampieri, G. Delarue, N. Abou Bakr, P. Alliez

3D point clouds stand as one of the prevalent representations for 3D data, offering the advantage of closely aligning with sensing technologies and providing an unbiased representation of a measured physical scene. Progressive compression is required for real-world applications operating on networked infrastructures with restricted or variable bandwidth. We contribute a novel approach that leverages a recursive binary space partition, where the partitioning planes are not necessarily axis-aligned and are optimized via an entropy criterion. The planes are encoded via a novel adaptive quantization method combined with prediction. The input 3D point cloud is encoded as an interlaced stream of partitioning planes and of point counts for the cells of the partition. Compared to previous work, the added value is an improved rate-distortion performance, especially at very low bitrates, which are critical for interactive navigation of large 3D point clouds on heterogeneous networked infrastructures.
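A toy sketch of choosing a non-axis-aligned splitting plane with an entropy score: candidate planes are sampled and scored by the Shannon entropy of the left/right point fractions they induce. Whether a codec prefers low- or high-entropy splits depends on its rate model, so the argmin used here, like the random candidate generation, is purely illustrative and not the paper's criterion.

```python
import numpy as np

def split_entropy(points, normal, offset):
    """Shannon entropy (bits) of the left/right point fractions induced by the
    plane {x : normal . x = offset}."""
    side = points @ normal - offset
    p = np.count_nonzero(side >= 0.0) / len(points)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def choose_plane(points, n_candidates=64, seed=0):
    """Sample arbitrarily oriented candidate planes through random points of the
    cell and keep the one with the lowest split entropy (illustrative only)."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_candidates):
        normal = rng.normal(size=3)
        normal /= np.linalg.norm(normal)
        offset = float(normal @ points[rng.integers(len(points))])
        score = split_entropy(points, normal, offset)
        if best is None or score < best[0]:
            best = (score, normal, offset)
    return best

# Toy usage on a random point cloud; recursing on both half-spaces would build the BSP.
pts = np.random.default_rng(1).random((2000, 3))
entropy_bits, normal, offset = choose_plane(pts)
print(round(entropy_bits, 3))
```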

Citations: 0
Front Matter
IF 2.7 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-22 | DOI: 10.1111/cgf.15144

Massachusetts Institute of Technology, Cambridge, MA, USA

June 24 – 26, 2024

Conference Co-Chairs

Justin Solomon, MIT

Mina Konaković Luković, MIT

Technical Program Co-Chairs

Ruizhen Hu, Shenzhen University

Sylvain Lefebvre, INRIA

Graduate School Co-Chairs

Silvia Sellán, University of Toronto

Edward Chien, Boston University

Steering Committee

Leif Kobbelt, RWTH Aachen University, DE

Marc Alexa, Technische Universität Berlin, DE

Pierre Alliez, INRIA, FR

Mirela Ben-Chen, Technion-IIT, IL

Hui Huang, Shenzhen University, CN

Niloy Mitra, University College London, GB

Daniele Panozzo, New York University, US

Alexa, Marc

TU Berlin, DE

Alliez, Pierre

Inria Sophia Antipolis, FR

Bærentzen, Jakob Andreas

Technical University of Denmark, DK

Belyaev, Alexander

Heriot-Watt University, GB

Ben-Chen, Mirela

Technion - Israel Institute of Technology, IL

Benes, Bedrich

Purdue University, US

Bommes, David

University of Bern, CH

Bonneel, Nicolas

CNRS, Université Lyon, FR

Botsch, Mario

TU Dortmund, DE

Boubekeur, Tamy

Adobe Research, FR

Campen, Marcel

Osnabrück University, DE

Chaine, Raphaelle

LIRIS CNRS, Université Lyon 1, FR

Chen, Renjie

University of Science and Technology of China, CN

Chen, Zhonggui

Xiamen University, CN

Chien, Edward

Boston University, US

Cignoni, Paolo

ISTI - CNR, IT

Cohen-Steiner, David

INRIA, FR

Desbrun, Mathieu

Inria / Ecole Polytechnique, FR

Dey, Tamal

Purdue University, US

Digne, Julie

LIRIS - CNRS, FR

Fu, Xiao-Ming

USTC, CN

Gao, Xifeng

Tencent America, US

Gingold, Yotam

George Mason University, US

Giorgi, Daniela

National Research Council of Italy, IT

Guerrero, Paul

Adobe Research, US

Herholz, Philipp

ETH Zurich, CH

Hildebrandt, Klaus

TU Delft, NL

Hoppe, Hugues

Independent Researcher, US

Hormann, Kai

Università della Svizzera italiana, CH

Huang, Jin

Zhejiang University, CN

Huang, Qixing

The University of Texas at Austin, US

Jacobson, Alec

University of Toronto and Adobe Research, CA

Ju, Tao

Washington University in St. Louis, US

Kazhdan, Misha

Johns Hopkins University, US

Keyser, John

Texas A & M University, US

Kim, Vladimir

Adobe, US

Kobbelt, Leif

RWTH Aachen University, DE

Kosinka, Jiri

Bernoulli Institute, University of Groningen, NL

Lai, Yu-Kun

Cardiff University, GB

Li, Lei

Technical University of Munich, DE

Lim, Isaak

RWTH Aachen University, DE

Liu, Yang

Microsoft Research Asia, CN

Livesu, Marco

IMATI CNR, IT

Ma, Rui

Jilin University, CN

Mahmoud, Ahmed

University of California, Davis, US

Malomo, Luigi

ISTI - CNR, IT

Mellado, Nicolas

CNRS, IRIT, Université de Toulouse, FR

Melzi, Simone

University of Milano-Bicocca, IT

Musialski, Przemyslaw

New Jersey Institute of Technology, US

Ovsjanikov, Maks

Ecole Polytechnique, FR

Panetta, Julian

University of California, Davis, US

Panozzo, Daniele

NYU, US

Patane, Giuseppe

CNR-IMATI, IT

Peng, Sida

Zhejiang University, CN

Poranne, Roi

University of Haifa, IL

Preiner, Reinhold

Graz University of Technology, AT

Puppo, Enrico

University of Genoa, IT

Ren, Jing

ETH Zurich, CH

Rodola, Emanuele

Sapienza University of Rome, IT

Rumpf, Martin

Bonn University, DE

Sacht, Leonardo

Universidade Federal de Santa Catarina, BR

Schaefer, Scott

Texas A & M University, US

Schneider, Teseo

University of Victoria, CA

Schröder, Peter

Caltech, US

Sellán, Silvia

University of Toronto, CA

Sharp, Nicholas

NVIDIA, CA

Smirnov, Dmitriy

Netflix, US

Song, Peng

Singapore University of Technology and Design, SG
{"title":"Front Matter","authors":"","doi":"10.1111/cgf.15144","DOIUrl":"https://doi.org/10.1111/cgf.15144","url":null,"abstract":"<p>Massachusetts Institute of Technology, Cambridge, MA, USA</p><p>June 24 – 26, 2024</p><p><b>Conference Co-Chairs</b></p><p>Justin Solomon, MIT</p><p>Mina Konaković Luković, MIT</p><p><b>Technical Program Co-Chairs</b></p><p>Ruizhen Hu, Shenzhen University</p><p>Sylvain Lefebvre, INRIA</p><p><b>Graduate School Co-Chairs</b></p><p>Silvia Sellán, University of Toronto</p><p>Edward Chien, Boston University</p><p><b>Steering Committee</b></p><p>Leif Kobbelt, RWTH Aachen University, DE</p><p>Marc Alexa, Technische Universität Berlin, DE</p><p>Pierre Alliez, INRIA, FR</p><p>Mirela Ben-Chen, Technion-IIT, IL</p><p>Hui Huang, Shenzhen University, CN</p><p>Niloy Mitra, University College London, GB</p><p>Daniele Panozzo, New York University, US</p><p><b>Alexa, Marc</b></p><p>TU Berlin, DE</p><p><b>Alliez, Pierre</b></p><p>Inria Sophia Antipolis, FR</p><p><b>Bærentzen, Jakob Andreas</b></p><p>Technical University of Denmark, DK</p><p><b>Belyaev, Alexander</b></p><p>Heriot-Watt University, GB</p><p><b>Ben-Chen, Mirela</b></p><p>Technion - Israel Institute of Technology, IL</p><p><b>Benes, Bedrich</b></p><p>Purdue University, US</p><p><b>Bommes, David</b></p><p>University of Bern, CH</p><p><b>Bonneel, Nicolas</b></p><p>CNRS, Université Lyon, FR</p><p><b>Botsch, Mario</b></p><p>TU Dortmund, DE</p><p><b>Boubekeur, Tamy</b></p><p>Adobe Research, FR</p><p><b>Campen, Marcel</b></p><p>Osnabrück University, DE</p><p><b>Chaine, Raphaelle</b></p><p>LIRIS CNRS, Université Lyon 1, FR</p><p><b>Chen, Renjie</b></p><p>University of Science and Technology of China, CN</p><p><b>Chen, Zhonggui</b></p><p>Xiamen University, CN</p><p><b>Chien, Edward</b></p><p>Boston University, US</p><p><b>Cignoni, Paolo</b></p><p>ISTI - CNR, IT</p><p><b>Cohen-Steiner, David</b></p><p>INRIA, FR</p><p><b>Desbrun, Mathieu</b></p><p>Inria / Ecole Polytechnique, FR</p><p><b>Dey, Tamal</b></p><p>Purdue University, US</p><p><b>Digne, Julie</b></p><p>LIRIS - CNRS, FR</p><p><b>Fu, Xiao-Ming</b></p><p>USTC, CN</p><p><b>Gao, Xifeng</b></p><p>Tencent America, US</p><p><b>Gingold, Yotam</b></p><p>George Mason University, US</p><p><b>Giorgi, Daniela</b></p><p>National Research Council of Italy, IT</p><p><b>Guerrero, Paul</b></p><p>Adobe Research, US</p><p><b>Herholz, Philipp</b></p><p>ETH Zurich, CH</p><p><b>Hildebrandt, Klaus</b></p><p>TU Delft, NL</p><p><b>Hoppe, Hugues</b></p><p>Independent Researcher, US</p><p><b>Hormann, Kai</b></p><p>Università della Svizzera italiana, CH</p><p><b>Huang, Jin</b></p><p>Zhejiang University, CN</p><p><b>Huang, Qixing</b></p><p>The University of Texas at Austin, US</p><p><b>Jacobson, Alec</b></p><p>University of Toronto and Adobe Research, CA</p><p><b>Ju, Tao</b></p><p>Washington University in St. 
Louis, US</p><p><b>Kazhdan, Misha</b></p><p>Johns Hopkins University, US</p><p><b>Keyser, John</b></p><p>Texas A &amp; M University, US</p><p><b>Kim, Vladimir</b></p><p>Adobe, US</p><p><b>Kobbelt, Leif</b></p><p>RWTH Aachen University, DE</p><p><b>Kosinka, Jiri</b></p><p>Bern","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":null,"pages":null},"PeriodicalIF":2.7,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.15144","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142041552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0