Front Matter
Computer Graphics Forum, Volume 43, Issue 7, pages i-xxii
DOI: 10.1111/cgf.14853

The 32nd Pacific Conference on Computer Graphics and Applications
Huangshan (Yellow Mountain), China
October 13-16, 2024

Conference Co-Chairs
Jan Bender, RWTH Aachen, Germany
Ligang Liu, University of Science and Technology of China, China
Denis Zorin, New York University, USA

Program Co-Chairs
Renjie Chen, University of Science and Technology of China, China
Tobias Ritschel, University College London, UK
Emily Whiting, Boston University, USA

Organization Co-Chairs
Xiao-Ming Fu, University of Science and Technology of China, China
Jianwei Hu, Huangshan University, China

The 2024 Pacific Graphics Conference, held in the scenic city of Huangshan, China, from October 13 to 16, marked a milestone year with record-breaking participation and submissions. As one of the premier forums for computer graphics research, the conference maintained its high standards of academic excellence while taking measures to handle an unprecedented submission volume.

This year saw an extraordinary 360 full paper submissions, the highest in Pacific Graphics history. To maintain our rigorous review standards, we implemented a streamlined process that included an initial sorting committee and a desk-review phase. Of the 305 submissions that proceeded to full review, each received a minimum of 3 reviews, with an average of 3.76 reviews per submission. Our double-blind review process was managed by an International Program Committee (IPC) of 112 experts, carefully selected to ensure regular renewal of perspectives in the field.

In the review process, each submission was assigned to two IPC members as primary and secondary reviewers. These reviewers, in turn, invited two additional tertiary reviewers, ensuring comprehensive evaluation. Authors were given a five-day window to submit 1,000-word rebuttals addressing reviewer comments and potential misunderstandings. This year's IPC meeting was conducted virtually over one week through asynchronous discussions.

From the initial 360 submissions, 109 papers were conditionally accepted, yielding an acceptance rate of 30.28%. Following the acceptance notifications, the final publication count was 105 papers. These were distributed across publication venues as follows: 59 papers were selected for journal publication in Computer Graphics Forum, while 50 papers were accepted to the Conference Track and published in the Proceedings. Additionally, 6 papers were recommended for fast-track review with major revisions for future Computer Graphics Forum consideration.

The accepted papers showcase the breadth of modern computer graphics research, spanning computational photography, geometry and mesh processing, appearance, shading, texture, rendering technologies, 3D scanning and analysis, physical simulation, human animation and motion capture, crowd and cloth simulation, 3D printing and fabrication, dig
{"title":"Front Matter","authors":"","doi":"10.1111/cgf.14853","DOIUrl":"https://doi.org/10.1111/cgf.14853","url":null,"abstract":"<p>The 32nd Pacific Conference on Computer Graphics and Applications</p><p>Huangshan (Yellow Mountain), China</p><p>October 13 – 16, 2024</p><p><b>Conference Co-Chairs</b></p><p>Jan Bender, RWTH Aachen, Germany</p><p>Ligang Liu, University of Science and Technology of China, China</p><p>Denis Zorin, New York University, USA</p><p><b>Program Co-Chairs</b></p><p>Renjie Chen, University of Science and Technology of China, China</p><p>Tobias Ritschel, University College London, UK</p><p>Emily Whiting, Boston University, USA</p><p><b>Organization Co-Chairs</b></p><p>Xiao-Ming Fu, University of Science and Technology of China, China</p><p>Jianwei Hu, Huangshan University, China</p><p>The 2024 Pacific Graphics Conference, held in the scenic city of Huangshan, China from October 13-16, marked a milestone year with record-breaking participation and submissions. As one of the premier forums for computer graphics research, the conference maintained its high standards of academic excellence while taking measures to handle unprecedented submission volumes.</p><p>This year saw an extraordinary 360 full paper submissions, the highest in Pacific Graphics history. To maintain our rigorous review standards, we implemented a streamlined process including an initial sorting committee and desk review phase. Of the 305 submissions that proceeded to full review, each received a minimum of 3 reviews, with an average of 3.76 reviews per submission. Our double-blind review process was managed by an International Program Committee (IPC) comprising 112 experts, carefully selected to ensure regular renewal of perspectives in the field.</p><p>In the review process, each submission was assigned to two IPC members as primary and secondary reviewers. These reviewers, in turn, invited two additional tertiary reviewers, ensuring comprehensive evaluation. Authors were provided a five-day window to submit 1,000-word rebuttals addressing reviewer comments and potential misunderstandings. This year's IPC meeting was conducted virtually over one week through asynchronous discussions.</p><p>From the initial 360 submissions, 109 papers were conditionally accepted, yielding an acceptance rate of 30.28%. Following the acceptance notifications, resulting in a final publication count of 105 papers. These were distributed across publication venues as follows: 59 papers were selected for journal publication in Computer Graphics Forum, while 50 papers were accepted to the Conference Track and published in the Proceedings. 
Additionally, 6 papers were recommended for fast-track review with major revisions for future Computer Graphics Forum consideration.</p><p>The accepted papers showcase the breadth of modern computer graphics research, spanning computational photography, geometry and mesh processing, appearance, shading, texture, rendering technologies, 3D scanning and analysis, physical simulation, human animation and motion capture, crowd and cloth simulation, 3D printing and fabrication, dig","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":"i-xxii"},"PeriodicalIF":2.7,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cgf.14853","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DiffPop: Plausibility-Guided Object Placement Diffusion for Image Composition
Jiacheng Liu, Hang Zhou, Shida Wei, Rui Ma
DOI: 10.1111/cgf.15246

In this paper, we address the problem of plausible object placement for the challenging task of realistic image composition. We propose DiffPop, the first framework that uses a plausibility-guided denoising diffusion probabilistic model to learn the scale and spatial relations among multiple objects and the corresponding scene image. First, we train an unguided diffusion model to directly learn the object placement parameters in a self-supervised manner. Then, we develop a human-in-the-loop pipeline that exploits human labeling of the diffusion-generated composite images to provide weak supervision for training a structural plausibility classifier. The classifier is then used to guide the diffusion sampling process towards plausible object placements. Experimental results verify the superiority of our method in producing plausible and diverse composite images on the new Cityscapes-OP dataset and the public OPA dataset, and demonstrate its potential in applications such as data augmentation and multi-object placement tasks. Our dataset and code will be released.
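To make the mechanism concrete, below is a minimal, hypothetical sketch of classifier-guided DDPM sampling over placement parameters, in the spirit of the abstract. The function names (guided_placement_sampling, denoiser, classifier), the per-object (scale, x, y) parameterisation, the binary plausible/implausible classifier output, and the use of PyTorch are all assumptions for illustration, not the authors' released code.

```python
# Hypothetical sketch: plausibility-guided reverse diffusion over object
# placement parameters.  `denoiser` and `classifier` are assumed callables;
# this is not the authors' implementation.
import torch

def guided_placement_sampling(denoiser, classifier, scene_feat,
                              betas, n_objects=1, guidance_scale=2.0):
    """Sample per-object placement parameters (scale, x, y), steering each
    reverse-diffusion step with a structural plausibility classifier.
    `betas` is a 1-D tensor holding the DDPM noise schedule."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    T = betas.shape[0]

    # Start from pure Gaussian noise over the placement parameters.
    x_t = torch.randn(n_objects, 3)

    for t in reversed(range(T)):
        t_batch = torch.full((n_objects,), t, dtype=torch.long)

        # Predict the noise and form the standard DDPM posterior mean.
        with torch.no_grad():
            eps = denoiser(x_t, t_batch, scene_feat)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x_t - coef * eps) / torch.sqrt(alphas[t])

        # Plausibility guidance: shift the mean along the gradient of the
        # classifier's log-probability of the "plausible" class (index 1,
        # assuming a binary plausible/implausible classifier).
        with torch.enable_grad():
            x_in = x_t.detach().requires_grad_(True)
            logits = classifier(x_in, t_batch, scene_feat)
            log_p = logits.log_softmax(dim=-1)[:, 1].sum()
            grad = torch.autograd.grad(log_p, x_in)[0]
        mean = mean + guidance_scale * betas[t] * grad

        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mean + torch.sqrt(betas[t]) * noise

    return x_t  # per-object (scale, x, y) placement parameters
```

The design point is that the unguided diffusion model stays frozen; plausibility only enters through the classifier's gradient at sampling time, so the same generator can be re-steered as the classifier improves with more human labels.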
{"title":"DiffPop: Plausibility-Guided Object Placement Diffusion for Image Composition","authors":"Jiacheng Liu, Hang Zhou, Shida Wei, Rui Ma","doi":"10.1111/cgf.15246","DOIUrl":"https://doi.org/10.1111/cgf.15246","url":null,"abstract":"<p>In this paper, we address the problem of plausible object placement for the challenging task of realistic image composition. We propose DiffPop, the first framework that utilizes plausibility-guided denoising diffusion probabilistic model to learn the scale and spatial relations among multiple objects and the corresponding scene image. First, we train an unguided diffusion model to directly learn the object placement parameters in a self-supervised manner. Then, we develop a human-in-the-loop pipeline which exploits human labeling on the diffusion-generated composite images to provide the weak supervision for training a structural plausibility classifier. The classifier is further used to guide the diffusion sampling process towards generating the plausible object placement. Experimental results verify the superiority of our method for producing plausible and diverse composite images on the new Cityscapes-OP dataset and the public OPA dataset, as well as demonstrate its potential in applications such as data augmentation and multi-object placement tasks. Our dataset and code will be released.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
iShapEditing: Intelligent Shape Editing with Diffusion Models
Jing Li, Juyong Zhang, Falai Chen
DOI: 10.1111/cgf.15253

Recent advancements in generative models have made image editing highly effective, with impressive results. Extending this progress to 3D geometry, we introduce iShapEditing, a novel framework for 3D shape editing that is applicable to both generated and real shapes. Users manipulate shapes by dragging handle points to corresponding targets, offering an intuitive and intelligent editing interface. Leveraging the Triplane Diffusion model and robust intermediate feature correspondence, our framework uses classifier guidance to adjust noise representations during the sampling process, ensuring alignment with user expectations while preserving plausibility. For real shapes, we employ shape predictions at each time step alongside a DDPM-based inversion algorithm to derive their latent codes, facilitating seamless editing. iShapEditing provides effective and intelligent control over shapes without additional model training or fine-tuning. Experimental examples demonstrate the effectiveness and superiority of our method in terms of editing accuracy and plausibility.
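For the inversion step the abstract mentions, the sketch below shows one common way such an inversion can be realised: deterministically reversing a DDIM-style sampler so a real shape's clean triplane latent is mapped back to a noise-level latent that reproduces it when re-sampled (and can then be steered by drag-based guidance). The names (invert_latent, denoiser, alpha_bars) and the DDIM-style formulation are assumptions for illustration; the paper's own DDPM-based inversion may differ.

```python
# Hypothetical sketch: deterministic (DDIM-style) inversion of a clean
# triplane latent into the diffusion model's noise space.  Not the
# authors' algorithm; a generic illustration of latent inversion.
import torch

@torch.no_grad()
def invert_latent(denoiser, x0, alpha_bars):
    """Map a clean latent x0 to a noise-level latent x_T by running the
    deterministic sampler in reverse.  `alpha_bars` is a 1-D tensor of
    cumulative noise-schedule products (most noise at the last index)."""
    x_t = x0
    T = alpha_bars.shape[0]
    for t in range(T - 1):
        t_batch = torch.full((x_t.shape[0],), t, dtype=torch.long)
        eps = denoiser(x_t, t_batch)

        # Estimate the clean latent implied by the current noise prediction ...
        x0_hat = (x_t - torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alpha_bars[t])
        # ... then re-noise it deterministically to the next (noisier) level.
        x_t = (torch.sqrt(alpha_bars[t + 1]) * x0_hat
               + torch.sqrt(1 - alpha_bars[t + 1]) * eps)

    return x_t  # latent code that can be edited and re-sampled
```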
{"title":"iShapEditing: Intelligent Shape Editing with Diffusion Models","authors":"Jing Li, Juyong Zhang, Falai Chen","doi":"10.1111/cgf.15253","DOIUrl":"https://doi.org/10.1111/cgf.15253","url":null,"abstract":"<p>Recent advancements in generative models have enabled image editing very effective with impressive results. By extending this progress to 3D geometry models, we introduce iShapEditing, a novel framework for 3D shape editing which is applicable to both generated and real shapes. Users manipulate shapes by dragging handle points to corresponding targets, offering an intuitive and intelligent editing interface. Leveraging the Triplane Diffusion model and robust intermediate feature correspondence, our framework utilizes classifier guidance to adjust noise representations during sampling process, ensuring alignment with user expectations while preserving plausibility. For real shapes, we employ shape predictions at each time step alongside a DDPM-based inversion algorithm to derive their latent codes, facilitating seamless editing. iShapEditing provides effective and intelligent control over shapes without the need for additional model training or fine-tuning. Experimental examples demonstrate the effectiveness and superiority of our method in terms of editing accuracy and plausibility.</p>","PeriodicalId":10687,"journal":{"name":"Computer Graphics Forum","volume":"43 7","pages":""},"PeriodicalIF":2.7,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}