
Computers & Graphics-Uk: Latest Articles

Editorial Note Issue 134: Advancing Graphics, Visualization, and Extended Reality
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-01 | Epub Date: 2026-02-17 | DOI: 10.1016/j.cag.2026.104549
Joaquim Jorge (Editor-in-Chief)
{"title":"Editorial Note Issue 134: Advancing Graphics, Visualization, and Extended Reality","authors":"Joaquim Jorge (Editor-in-Chief)","doi":"10.1016/j.cag.2026.104549","DOIUrl":"10.1016/j.cag.2026.104549","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"134 ","pages":"Article 104549"},"PeriodicalIF":2.8,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147394648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Locomotion in CAVE: Enhancing immersion through full-body motion
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-01 | Epub Date: 2025-12-06 | DOI: 10.1016/j.cag.2025.104510
Xiaohui Li , Xiaolong Liu , Zhongchen Shi , Wei Chen , Liang Xie , Meng Gai , Jun Cao , Suxia Zhang , Erwei Yin
The Cave Automatic Virtual Environment (CAVE) is one of the immersive virtual reality (VR) devices currently used to present virtual environments. However, locomotion in the CAVE is constrained by unnatural interaction methods, which severely hinder user experience and immersion. We propose a locomotion framework for CAVE environments that enhances the immersive locomotion experience through optimized human-motion recognition. First, we construct a four-sided display CAVE system and calibrate its cameras with a dynamic Perspective-n-Point method. Using the resulting camera intrinsic and extrinsic parameters, an action-recognition architecture determines the user's action category, which is then passed to a graphics workstation that renders the corresponding effects on the screens. We designed a user study to validate the effectiveness of our method. Compared with traditional methods, ours significantly improves realness and self-presence in the virtual environment while effectively reducing motion sickness.
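The abstract's final pipeline step maps a recognized action category to a locomotion command for the renderer. The paper does not publish this mapping; the sketch below is a toy illustration in which all action names and velocity values are hypothetical, not taken from the system.

```python
# Toy sketch: map recognized full-body action categories to locomotion
# commands for a CAVE renderer. Names and values are illustrative only.

# forward speed (m/s) and turn rate (deg/s) per recognized action
ACTION_TO_COMMAND = {
    "walk_in_place": (1.2, 0.0),
    "turn_left":     (0.0, 45.0),
    "turn_right":    (0.0, -45.0),
    "stand_still":   (0.0, 0.0),
}

def locomotion_command(action):
    """Return (forward m/s, turn deg/s); unknown actions stop the user."""
    return ACTION_TO_COMMAND.get(action, (0.0, 0.0))
```

In a real system the command would be sent each frame to the workstation that drives the four display walls; stopping on unrecognized actions is a conservative default.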
Citations: 0
Consistent orientation normal vector estimation for scattered point cloud
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-01 | Epub Date: 2026-01-19 | DOI: 10.1016/j.cag.2026.104534
Hui Wang, Ming Li, QingYue Wei
Accurate normal vector estimation for scattered point clouds is a fundamental and challenging task in three-dimensional reconstruction. We introduce a novel framework that integrates curvature-aware spherical fitting with robust kernel regression to estimate reliable and consistently oriented normal vectors. Our approach explicitly models local geometry using spherical surfaces, enabling precise capture of geometric details in high-variability regions, including sharp features and high-curvature areas. The kernel regression mechanism adaptively weights neighboring points based on spatial proximity and geometric consistency, effectively suppressing the effects of noise, outliers, and non-uniform sampling. We further propose a variational model that combines local geometric constraints with global propagation to ensure orientation consistency across the entire point cloud. Extensive experiments demonstrate that our method effectively handles challenging conditions, including noise, outliers, surfaces in close proximity, non-uniform sampling, and sharp features, achieving superior accuracy and robustness compared with existing approaches.
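The kernel-weighted idea in the abstract can be illustrated with a much simpler stand-in: a Gaussian-kernel weighted least-squares plane fit around a query point, whose fitted plane yields the local normal. This is not the paper's curvature-aware spherical fit, just a minimal sketch of proximity-weighted regression for normal estimation; the bandwidth `h` and helper names are assumptions.

```python
import math

def weighted_plane_normal(neighbors, center, h=0.5):
    """Fit z = a*x + b*y + c to `neighbors` with Gaussian kernel weights
    centered at `center`; return the unit normal (-a, -b, 1)/|.|.
    Toy stand-in for kernel-regression-based normal estimation."""
    # accumulate weighted normal equations S * [a, b, c] = t
    S = [[0.0] * 3 for _ in range(3)]
    t = [0.0] * 3
    for (x, y, z) in neighbors:
        d2 = (x - center[0])**2 + (y - center[1])**2 + (z - center[2])**2
        w = math.exp(-d2 / (2 * h * h))      # closer points weigh more
        phi = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                S[i][j] += w * phi[i] * phi[j]
            t[i] += w * phi[i] * z

    def det3(M):  # determinant of a 3x3 matrix
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

    # solve the 3x3 system by Cramer's rule
    D = det3(S)
    sol = []
    for k in range(3):
        Mk = [row[:] for row in S]
        for i in range(3):
            Mk[i][k] = t[i]
        sol.append(det3(Mk) / D)
    a, b, _ = sol
    norm = math.sqrt(a * a + b * b + 1.0)
    return (-a / norm, -b / norm, 1.0 / norm)
```

For points lying exactly on the plane z = 0.5x, the fit recovers the normal (-0.5, 0, 1) normalized, regardless of the weighting; the weights matter once noise and outliers enter.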
Citations: 0
Automated generation of housing layouts using graph-rules
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-01 | Epub Date: 2025-11-29 | DOI: 10.1016/j.cag.2025.104506
Shiksha, Rohit Lohani, Krishnendra Shekhawat, Arsh Singh, Karan Agrawal
In architectural design, floor planning plays a crucial role in shaping the functionality and efficiency of a building, requiring designers to strike a balance between diverse and often conflicting objectives. It is a multi-constraint problem, and over the past few years many tools have been proposed to generate floor plans automatically, most of them based on AI/ML techniques.

In this paper, we propose software based on graph algorithms for the automated generation of housing layouts (floor plans) with rectangular boundaries that addresses adjacency and non-adjacency constraints, room positions (interior or exterior), and circulation. Once the user provides the input constraints (many of which are built-in, e.g., the dining room is on the exterior and adjacent to the kitchen, and the kitchen is not adjacent to the toilets), the software generates a range of graphs representing these connections and uses them to produce all possible dimensioned housing-layout options for the user to choose from.
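The built-in rules described above amount to required and forbidden edges in an adjacency graph over rooms. The checker below is a minimal sketch of that idea, not the software's actual representation; the room names and rule sets are illustrative.

```python
# Toy adjacency-rule check for a candidate housing layout.
# Rooms and rules mirror the examples in the abstract (dining adjacent to
# kitchen, kitchen not adjacent to toilet) but are otherwise hypothetical.
adjacency_required = {
    ("dining", "kitchen"),
    ("dining", "living"),
}
adjacency_forbidden = {
    ("kitchen", "toilet"),
}

def layout_satisfies(edges):
    """`edges` is the set of adjacent room pairs in a candidate layout;
    returns True iff all required pairs are present and no forbidden
    pair appears (pairs are treated as unordered)."""
    norm = {tuple(sorted(e)) for e in edges}
    ok_required = all(tuple(sorted(e)) in norm for e in adjacency_required)
    ok_forbidden = all(tuple(sorted(e)) not in norm for e in adjacency_forbidden)
    return ok_required and ok_forbidden
```

In the full system, graphs passing such checks would then be dimensioned into rectangular floor plans; here only the constraint-satisfaction step is shown.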
Citations: 0
Energy-based haptic rendering for real-time surgical simulation
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-01 | Epub Date: 2025-12-23 | DOI: 10.1016/j.cag.2025.104524
Lei He , Mingbo Hu , Wenli Xiu , Hongyu Wu , Siming Zheng , Shuai Li , Qian Dong , Aimin Hao
Haptic-based surgical simulation is widely used for training surgical skills. However, simulating the interaction between rigid surgical instruments and soft tissues presents significant technical challenges. In this paper, we propose an energy-based haptic rendering method that achieves both large deformations and rigid–soft haptic interaction. Unlike existing methods, both the rigid tools and the soft tissues are modeled by an energy-based virtual coupling system. The constraints of soft deformation, tool–object interaction, and haptic rendering are defined by potential energy. Benefiting from these energy-based constraints, we can realize complex surgical operations, such as inserting tools into soft tissue. The virtual coupling of soft tissue separates haptic interaction into two components: soft deformation with high computational complexity, and high-frequency haptic rendering. The soft deformation with shape constraints is GPU-accelerated at a relatively low frequency (60–100 Hz), while the haptic rendering runs in a separate thread at a high frequency (≥1000 Hz). We have implemented haptic simulation for two commonly used surgical operations, pressing and pulling. The experimental results show that our method achieves stable feedback force and non-penetration between the tool and soft tissue under large soft deformation.
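Virtual coupling, as referenced in the abstract, connects the haptic device pose to a simulated tool proxy through a spring-damper; the spring force both drives the proxy and is fed back to the user. The 1-D sketch below illustrates only that coupling idea; the gains, mass, and explicit setting are assumptions, not the paper's energy formulation.

```python
# Minimal 1-D virtual-coupling sketch: the device pose drives a proxy
# through a spring-damper; the same force is what the user would feel.
# Gains and integration scheme are illustrative, not the paper's model.

def coupling_force(x_device, x_proxy, v_proxy, k=500.0, c=5.0):
    """Spring-damper force pulling the proxy toward the device pose."""
    return k * (x_device - x_proxy) - c * v_proxy

def step_proxy(x_device, x_proxy, v_proxy, mass=0.1, dt=0.001):
    """One semi-implicit Euler step of the proxy at haptic rate (1 kHz)."""
    f = coupling_force(x_device, x_proxy, v_proxy)
    v_proxy += (f / mass) * dt   # update velocity from coupling force
    x_proxy += v_proxy * dt      # then advance position with new velocity
    return x_proxy, v_proxy
```

Running this loop at 1 kHz while the deformable solver updates at 60-100 Hz is the rate separation the abstract describes: the coupling lets the fast haptic thread render smooth forces between slow deformation updates.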
Citations: 0
Sparse-to-dense light field reconstruction based on Spatial–Angular Multi-Dimensional Interaction and Guided Residual Networks
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-01 | Epub Date: 2025-12-16 | DOI: 10.1016/j.cag.2025.104525
Haijiao Gu, Yan Piao
Dense light fields contain rich spatial and angular information, making them highly valuable for applications such as depth estimation, 3D reconstruction, and multi-view elemental image synthesis. Light-field cameras capture both spatial and angular scene information in a single shot. However, due to high hardware requirements and substantial storage costs, practical acquisitions often yield only sparse light-field maps. To address this problem, this paper proposes an efficient end-to-end sparse-to-dense light-field reconstruction method based on Spatial–Angular Multi-Dimensional Interaction and Guided Residual Networks. The Spatial–Angular Multi-Dimensional Interaction Module (SAMDIM) fully exploits the four-dimensional structural information of light-field image data in both spatial and angular domains. It performs dual-modal interaction across spatial and angular dimensions to generate dense subviews. The channel attention mechanism within the interaction module significantly improves the image quality of these dense subviews. Finally, the Guided Residual Refinement Module (GRRM) further enhances the texture details of the generated dense subviews, enhancing the reconstruction quality of the dense light field. Experimental results demonstrate that our proposed network model achieves clear advantages over state-of-the-art methods in both visual quality and quantitative metrics on real-world datasets.
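The channel attention mentioned in the abstract can be sketched in squeeze-and-excitation style: average-pool each channel to a scalar, squash it through a sigmoid, and rescale the channel by that gate. This toy version omits the learned bottleneck layers a real attention module would have and is not SAMDIM's actual design.

```python
import math

def channel_attention(feature_maps):
    """Squeeze-and-excitation style gate over a list of channels.
    Each channel is a 2-D list (H x W). Toy stand-in for the attention
    inside SAMDIM: no learned weights, just pool -> sigmoid -> rescale."""
    out = []
    for ch in feature_maps:
        # squeeze: global average pool of the channel
        pooled = sum(map(sum, ch)) / (len(ch) * len(ch[0]))
        # excite: sigmoid gate in (0, 1)
        gate = 1.0 / (1.0 + math.exp(-pooled))
        # rescale every value in the channel by its gate
        out.append([[gate * v for v in row] for row in ch])
    return out
```

Channels with strong average activation are passed through nearly unchanged, while weak channels are suppressed, which is the reweighting effect the abstract credits with improving subview quality.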
Citations: 0
Cell-constrained particles for incompressible fluids
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-01 | Epub Date: 2026-01-17 | DOI: 10.1016/j.cag.2026.104532
Zohar Levi
Incompressibility is a fundamental condition in most fluid models. Accumulation of simulation errors violates it and causes fluid volume loss. Prior work has proposed correction methods to combat this drift, but they remain approximate and can fail in extreme scenarios. We present a particle-in-cell method that strictly enforces a grid-based definition of discrete incompressibility at every time step.
We formulate a linear programming (LP) problem that bounds the number of particles that end up in each grid cell. To scale this to large 3D domains, we introduce a narrow-band variant with specialized band-interface constraints to ensure volume preservation. Further acceleration is achieved by simplifying the problem and adding a band-specific correction step that is formulated as a minimum-cost flow problem (MCFP).
We also address coupling with moving solids by incorporating obstacle-aware penalties directly into our optimization. In extreme test scenes, we demonstrate strict volume preservation and robust behavior where state-of-the-art methods exhibit noticeable volume drift or artifacts.
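The quantity the paper's LP bounds is the number of particles per grid cell. The sketch below computes that count and flags overfull cells on a uniform 2-D grid; it deliberately omits the LP/MCFP correction itself, and the cell size and cap are illustrative parameters.

```python
from collections import defaultdict

def cell_counts(particles, cell=1.0):
    """Count particles per grid cell (2-D positions, uniform grid)."""
    counts = defaultdict(int)
    for (x, y) in particles:
        counts[(int(x // cell), int(y // cell))] += 1
    return counts

def overfull_cells(particles, cap, cell=1.0):
    """Cells whose particle count exceeds `cap`, i.e. the constraint
    violations the paper's LP would redistribute at each time step
    (the LP itself is omitted in this sketch)."""
    return {c: n for c, n in cell_counts(particles, cell).items() if n > cap}
```

Strictly enforcing `count <= cap` in every cell at every step is the discrete incompressibility condition the abstract describes; the method's contribution is doing that redistribution exactly and efficiently.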
Citations: 0
Acknowledging our reviewer community
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-01 | Epub Date: 2026-02-19 | DOI: 10.1016/j.cag.2026.104550
{"title":"Acknowledging our reviewer community","authors":"","doi":"10.1016/j.cag.2026.104550","DOIUrl":"10.1016/j.cag.2026.104550","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"134 ","pages":"Article 104550"},"PeriodicalIF":2.8,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147394649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Foreword to the Computer Graphics & Visual Computing conference 2024 special section
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-01 | Epub Date: 2026-01-13 | DOI: 10.1016/j.cag.2026.104530
Aidan Slingsby, Mai Elshehaly, Kai Xu
{"title":"Foreword to the Computer Graphics & Visual Computing conference 2024 special section","authors":"Aidan Slingsby,&nbsp;Mai Elshehaly,&nbsp;Kai Xu","doi":"10.1016/j.cag.2026.104530","DOIUrl":"10.1016/j.cag.2026.104530","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"134 ","pages":"Article 104530"},"PeriodicalIF":2.8,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146037075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Foreword to the special section on recent advances in graphics and interaction (RAGI 2025)
IF 2.8 | CAS Tier 4, Computer Science | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2026-02-01 | Epub Date: 2025-12-04 | DOI: 10.1016/j.cag.2025.104509
Tomás Alves, José Creissac Campos, Alan Chalmers
{"title":"Foreword to the special section on recent advances in graphics and interaction (RAGI 2025)","authors":"Tomás Alves,&nbsp;José Creissac Campos,&nbsp;Alan Chalmers","doi":"10.1016/j.cag.2025.104509","DOIUrl":"10.1016/j.cag.2025.104509","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":"134 ","pages":"Article 104509"},"PeriodicalIF":2.8,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145796673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0