
ACM Transactions on Graphics: Latest Articles

Fast Galerkin Multigrid Method for Unstructured Meshes
IF 6.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763327
Jia-Ming Lu, Tailing Yuan, Zhe-Han Mo, Shi-Min Hu
We present a novel multigrid solver framework that significantly advances the efficiency of physical simulation for unstructured meshes. While multi-grid methods theoretically offer linear scaling, their practical implementation for deformable body simulations faces substantial challenges, particularly on GPUs. Our framework achieves up to 6.9× speedup over traditional methods through an innovative combination of matrix-free vertex block Jacobi smoothing with a Full Approximation Scheme (FAS), enabling both piecewise constant and linear Galerkin formulations without the computational burden of dense coarse matrices. Our approach demonstrates superior performance across varying mesh resolutions and material stiffness values, maintaining consistent convergence even under extreme deformations and challenging initial configurations. Comprehensive evaluations against state-of-the-art methods confirm our approach achieves lower simulation error with reduced computational cost, enabling simulation of tetrahedral meshes with over one million vertices at approximately one frame per second on modern GPUs.
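The abstract's ingredients (damped Jacobi smoothing plus a Galerkin coarse operator) can be illustrated on a toy linear problem. The sketch below is not the paper's method: it runs a plain two-grid correction cycle for a 1D Poisson system, forming the coarse matrix as PᵀAP. The paper's FAS variant additionally transfers the full solution to handle nonlinear elasticity, and uses matrix-free vertex-block smoothing on GPUs, both of which this linear sketch omits.

```python
import numpy as np

def jacobi(A, x, b, omega=2/3, iters=3):
    """Damped (weighted) Jacobi smoothing: x <- x + omega * D^{-1} (b - A x)."""
    Dinv = 1.0 / np.diag(A)
    for _ in range(iters):
        x = x + omega * Dinv * (b - A @ x)
    return x

def two_grid(A, b, x, P):
    """One two-grid cycle with a Galerkin coarse operator A_c = P^T A P."""
    x = jacobi(A, x, b)                    # pre-smooth
    r = b - A @ x                          # fine-grid residual
    Ac = P.T @ A @ P                       # Galerkin coarse matrix
    ec = np.linalg.solve(Ac, P.T @ r)      # coarse correction (direct solve)
    x = x + P @ ec                         # prolongate and correct
    return jacobi(A, x, b)                 # post-smooth

n = 31                                                # fine-grid interior points
A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # 1D Poisson matrix
b = np.ones(n)

# linear-interpolation prolongation from (n-1)//2 coarse points
nc = (n - 1) // 2
P = np.zeros((n, nc))
for j in range(nc):
    i = 2*j + 1          # fine index of coarse point j
    P[i, j] = 1.0
    P[i-1, j] += 0.5
    P[i+1, j] += 0.5

x = np.zeros(n)
for _ in range(20):
    x = two_grid(A, b, x, P)
err = np.linalg.norm(b - A @ x)
```

Each cycle reduces the error by a roughly constant factor independent of the mesh size, which is the linear-scaling property the abstract refers to.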
Citations: 0
BrepGPT: Autoregressive B-rep Generation with Voronoi Half-Patch
IF 6.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763323
Pu Li, Wenhao Zhang, Weize Quan, Biao Zhang, Peter Wonka, Dongming Yan
Boundary representation (B-rep) is the de facto standard for CAD model representation in modern industrial design. The intricate coupling between geometric and topological elements in B-rep structures has forced existing generative methods to rely on cascaded multi-stage networks, resulting in error accumulation and computational inefficiency. We present BrepGPT, a single-stage autoregressive framework for B-rep generation. Our key innovation lies in the Voronoi Half-Patch (VHP) representation, which decomposes B-reps into unified local units by assigning geometry to nearest half-edges and sampling their next pointers. Unlike hierarchical representations that require multiple distinct encodings for different structural levels, our VHP representation facilitates unifying geometric attributes and topological relations in a single, coherent format. We further leverage dual VQ-VAEs to encode both vertex topology and Voronoi Half-Patches into vertex-based tokens, achieving a more compact sequential encoding. A decoder-only Transformer is then trained to autoregressively predict these tokens, which are subsequently mapped to vertex-based features and decoded into complete B-rep models. Experiments demonstrate that BrepGPT achieves state-of-the-art performance in unconditional B-rep generation. The framework also exhibits versatility in various applications, including conditional generation from category labels, point clouds, text descriptions, and images, as well as B-rep autocompletion and interpolation.
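The VHP representation decomposes a B-rep by assigning geometry to half-edges and sampling their next pointers. For readers unfamiliar with the underlying structure, here is a minimal, hypothetical half-edge sketch showing what "next pointers" traverse; the paper's actual data layout and tokenization are not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class HalfEdge:
    head: int                  # index of the vertex this half-edge points to
    next: "HalfEdge" = None    # next half-edge around the same face

def face_vertices(h):
    """Collect the vertex loop of the face a half-edge belongs to by
    following `next` pointers until the traversal returns to the start."""
    verts, cur = [], h
    while True:
        verts.append(cur.head)
        cur = cur.next
        if cur is h:
            return verts

# one triangular face: three half-edges linked cyclically by `next`
a, b, c = HalfEdge(0), HalfEdge(1), HalfEdge(2)
a.next, b.next, c.next = b, c, a
loop = face_vertices(a)
```

Because every half-edge has exactly one `next`, local units anchored at half-edges (as VHP does) can encode face topology without a separate hierarchical encoding per structural level.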
Citations: 0
Fovea Stacking: Imaging with Dynamic Localized Aberration Correction
IF 6.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763278
Shi Mao, Yogeshwar Nath Mishra, Wolfgang Heidrich
The desire for cameras with smaller form factors has recently led to a push for exploring computational imaging systems with reduced optical complexity, such as a smaller number of lens elements. Unfortunately, such simplified optical systems usually suffer from severe aberrations, especially in off-axis regions, which can be difficult to correct purely in software. In this paper we introduce Fovea Stacking, a new type of imaging system that utilizes an emerging dynamic optical component called the deformable phase plate (DPP) for localized aberration correction anywhere on the image sensor. By optimizing DPP deformations through a differentiable optical model, off-axis aberrations are corrected locally, producing a foveated image with enhanced sharpness at the fixation point, analogous to the eye's fovea. Stacking multiple such foveated images, each with a different fixation point, yields a composite image free from aberrations. To efficiently cover the entire field of view, we propose joint optimization of DPP deformations under imaging budget constraints. Due to the DPP device's non-linear behavior, we introduce a neural network-based control model for improved agreement between simulation and hardware performance. We further demonstrate that for extended depth-of-field imaging, Fovea Stacking outperforms traditional focus stacking in image quality. By integrating object detection or eye-tracking, the system can dynamically adjust the lens to track the object of interest, enabling real-time foveated video suitable for downstream applications such as surveillance or foveated virtual reality displays.
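As a loose analogue of compositing several foveated captures, the sketch below implements generic sharpness-based focus stacking: each output pixel is taken from the input image whose local Laplacian magnitude is largest there. This is illustrative only, with invented toy images, and is not the paper's optimization-based pipeline.

```python
import numpy as np

def laplacian_mag(img):
    """Per-pixel |Laplacian| as a simple local-sharpness measure."""
    p = np.pad(img, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return np.abs(lap)

def stack(images):
    """Composite: at each pixel keep the value from the sharpest image."""
    sharp = np.stack([laplacian_mag(im) for im in images])
    pick = np.argmax(sharp, axis=0)        # index of sharpest image per pixel
    imgs = np.stack(images)
    h, w = images[0].shape
    return imgs[pick, np.arange(h)[:, None], np.arange(w)[None, :]]

# toy example: each image carries a sharp step edge in a different region
h, w = 8, 8
left = np.zeros((h, w)); left[:, :2] = 1.0     # sharp content on the left
right = np.zeros((h, w)); right[:, 6:] = 1.0   # sharp content on the right
out = stack([left, right])
```

The composite keeps the high-contrast columns from whichever input is locally sharper, mirroring how stacking fixation points assembles an image that is sharp everywhere.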
Citations: 0
JumpingGS: Level-jump 3D Gaussian Representation for Delicate Textures in Aerial Large-scale Scene Rendering
IF 6.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763347
Jiongming Qin, Kaixuan Zhou, Yu Jiang, Huizhi Zhu, Fei Luo, Chunxia Xiao
Existing 3D Gaussian (3DGS) based methods tend to produce blurriness and artifacts on delicate textures (small objects and high-frequency textures) in aerial large-scale scenes. The reason is that the delicate textures usually occupy a relatively small number of pixels, and the accumulated gradients from loss function are difficult to promote the splitting of 3DGS. To minimize the rendering error, the model will use a small number of large Gaussians to cover these details, resulting in blurriness and artifacts. To solve the above problem, we propose a novel hierarchical Gaussian: JumpingGS. JumpingGS assigns different levels to Gaussians to establish a hierarchical representation. Low-level Gaussians are responsible for the coarse appearance, while high-level Gaussians are responsible for the details. First, we design a splitting strategy that allows low-level Gaussians to skip intermediate levels and directly split the appropriate high-level Gaussians for delicate textures. This level-jump splitting ensures that the weak gradients of delicate textures can always activate a higher level instead of being ignored by the intermediate levels. Second, JumpingGS reduces the gradient and opacity thresholds for density control according to the representation levels, which improves the sensitivity of high-level Gaussians to delicate textures. Third, we design a novel training strategy to detect training views in hard-to-observe regions, and train the model multiple times on these views to alleviate underfitting. Experiments on aerial large-scale scenes demonstrate that JumpingGS outperforms existing 3DGS-based methods, accurately and efficiently recovering delicate textures in large scenes.
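The level-jump idea can be caricatured with a hypothetical decision rule: if per-level split thresholds shrink with level, a weak gradient from a delicate texture can directly select a high level instead of being filtered out at coarser ones. The threshold schedule, decay factor, and function below are invented for illustration and are not the paper's actual density-control parameters.

```python
def target_level(grad_mag, base_threshold=0.01, decay=0.5, max_level=6):
    """Hypothetical level-jump rule: per-level split thresholds shrink
    geometrically (tau_l = base_threshold * decay**l), so a weak gradient
    that never reaches the level-0 threshold can still activate a high
    level directly. Returns the lowest level whose threshold the gradient
    exceeds, or None if even the finest level is not triggered."""
    for level in range(max_level + 1):
        if grad_mag >= base_threshold * decay**level:
            return level
    return None

# a weak gradient (0.0008) jumps straight to level 4,
# skipping intermediate levels 1-3 entirely
lvl = target_level(0.0008)
```

This captures the qualitative claim in the abstract: delicate textures accumulate weak gradients, so only levels with lowered thresholds can be activated by them.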
Citations: 0
Neural Octahedral Field: Octahedral Prior for Simultaneous Smoothing and Sharp Edge Regularization
IF 6.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763362
Ruichen Zheng, Tao Yu, Ruizhen Hu
Neural implicit representation, the parameterization of a continuous distance function as a Multi-Layer Perceptron (MLP), has emerged as a promising lead in tackling surface reconstruction from unoriented point clouds. In the presence of noise, however, its lack of explicit neighborhood connectivity makes sharp-edge identification particularly challenging, preventing the separation of smoothing and sharpening operations that is achievable with its discrete counterparts. In this work, we propose to tackle this challenge with an auxiliary field, the octahedral field. We observe that both smoothness and sharp features in the distance field can be equivalently described by the smoothness in octahedral space. Therefore, by aligning and smoothing an octahedral field alongside the implicit geometry, our method behaves analogously to bilateral filtering, resulting in a smooth reconstruction while preserving sharp edges. Despite operating purely pointwise, our method outperforms various traditional and neural implicit fitting approaches across extensive experiments, and is very competitive with methods that require normals and data priors. Code and data are available at: https://github.com/Ankbzpx/frame-field.
Citations: 0
Acoustic Reliefs
IF 6.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763287
Jeremy Chew, Michal Piovarci, Kangrui Xue, Doug James, Bernd Bickel
We present a framework to optimize and generate Acoustic Reliefs : acoustic diffusers that not only perform well acoustically in scattering sound uniformly in all directions, but are also visually interesting and can approximate user-provided images. To this end, we develop a differentiable acoustics simulator based on the boundary element method, and integrate it with a differentiable renderer coupled with a vision model to jointly optimize for acoustics, appearance, and fabrication constraints at the same time. We generate various examples and fabricate two room-scale reliefs. The result is a validated simulation and optimization scheme for generating acoustic reliefs whose appearances can be guided by a provided image.
我们提出了一个优化和生成声学浮雕的框架:声学扩散器不仅在声学上表现良好,在各个方向均匀地散射声音,而且在视觉上也很有趣,可以近似用户提供的图像。为此,我们开发了一个基于边界元方法的可微声学模拟器,并将其与可微渲染器结合视觉模型进行集成,同时对声学、外观和制造约束进行联合优化。我们产生了各种各样的例子,并制作了两个房间大小的浮雕。结果是一种有效的模拟和优化方案,用于生成声起伏,其外观可以由所提供的图像引导。
Citations: 0
SZ Sequences: Binary-Based (0, 2^q)-Sequences
IF 6.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763272
Abdalla G. M. Ahmed, Matt Pharr, Victor Ostromoukhov, Hui Huang
Low-discrepancy sequences have seen widespread adoption in computer graphics thanks to the superior rates of convergence that they provide. Because rendering integrals often are comprised of products of lower-dimensional integrals, recent work has focused on developing sequences that are also well-distributed in lower-dimensional projections. To this end, we introduce a novel construction of binary-based (0, 4)-sequences; that is, progressive fully multi-stratified sequences of 4D points, and extend the idea to higher power-of-two dimensions. We further show that not only is it possible to nest lower-dimensional sequences in higher-dimensional ones, for example, embedding a (0, 2)-sequence within our (0, 4)-sequence, but that we can ensemble two (0, 2)-sequences into a (0, 4)-sequence, four (0, 4)-sequences into a (0, 16)-sequence, and so on. Such sequences can provide excellent rates of convergence when integrals include lower-dimensional integration problems in 2, 4, 16, ... dimensions. Our construction is based on using 2×2 block matrices as symbols to construct larger matrices that potentially generate a sequence with the target (0, s)-sequence in base s property. We describe how to search for suitable alphabets and identify two distinct, cross-related alphabets of block symbols, which we call s and z, hence SZ for the resulting family of sequences. Given the alphabets, we construct candidate generator matrices and search for valid sets of matrices. We then infer a simple recurrence formula to construct full-resolution (64-bit) matrices. Because our generator matrices are binary, they allow highly efficient implementation using bitwise operations and can be used as a drop-in replacement for Sobol matrices in existing applications. We compare SZ sequences to state-of-the-art low-discrepancy sequences, and demonstrate mean relative squared error improvements up to 1.93× in common rendering applications.
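Binary generator matrices are evaluated exactly like Sobol matrices: the coordinate for sample index n is the XOR of the matrix columns selected by the set bits of n. A minimal sketch of this bitwise evaluation follows; the identity matrix used here reproduces the base-2 van der Corput sequence, while the SZ matrices themselves are not reproduced.

```python
def digital_point(index, columns, bits=32):
    """Evaluate one coordinate of a base-2 digital sequence: XOR together
    the generator-matrix columns selected by the bits of `index`, then
    interpret the result as a fixed-point fraction in [0, 1)."""
    acc = 0
    j = 0
    while index:
        if index & 1:
            acc ^= columns[j]
        index >>= 1
        j += 1
    return acc / float(1 << bits)

# identity generator matrix: column j has only bit (bits-1-j) set, which
# reproduces the base-2 radical inverse (van der Corput sequence)
bits = 32
identity_cols = [1 << (bits - 1 - j) for j in range(bits)]
pts = [digital_point(n, identity_cols) for n in range(4)]
```

Swapping in a different set of 32- or 64-bit columns changes the sequence but not the evaluation cost, which is why binary matrices can serve as a drop-in replacement for Sobol matrices.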
Citations: 0
Generalized Unbiased Reconstruction for Gradient-Domain Rendering
IF 6.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763297
Difei Yan, Zengyu Li, Lifan Wu, Kun Xu
Gradient-domain rendering estimates image-space gradients using correlated sampling, which can be combined with color information to reconstruct smoother and less noisy images. While simple ℒ2 reconstruction is unbiased, it often leads to visible artifacts. In contrast, most recent reconstruction methods based on learned or handcrafted techniques improve visual quality but introduce bias, leaving the development of practically unbiased reconstruction approaches relatively underexplored. In this work, we propose a generalized framework for unbiased reconstruction in gradient-domain rendering. We first derive the unbiasedness condition under a general formulation that linearly combines pixel colors and gradients. Based on this unbiasedness condition, we design a practical algorithm that minimizes image variance while strictly satisfying unbiasedness. Experimental results demonstrate that our method not only guarantees unbiasedness but also achieves superior quality compared to existing unbiased and slightly biased reconstruction methods.
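The unbiasedness of plain ℒ2 reconstruction mentioned in the abstract can be checked in a 1D sketch: the screened-Poisson solution is linear in the inputs (c, g), and feeding in noise-free values c = μ and g = Dμ returns μ exactly, so by linearity E[x] = μ whenever E[c] = μ and E[g] = Dμ. This toy solver is only the classical baseline, not the paper's generalized framework.

```python
import numpy as np

def l2_reconstruct(c, g, alpha=1.0):
    """Screened-Poisson L2 reconstruction in 1D:
    x = argmin ||x - c||^2 + alpha * ||D x - g||^2,
    where D is the forward-difference operator. The normal equations give
    (I + alpha D^T D) x = c + alpha D^T g, linear in (c, g)."""
    n = len(c)
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # forward differences
    A = np.eye(n) + alpha * D.T @ D
    return np.linalg.solve(A, c + alpha * D.T @ g)

mu = np.sin(np.linspace(0.0, 3.0, 16))    # ground-truth pixel values
D = np.eye(15, 16, k=1) - np.eye(15, 16)
x = l2_reconstruct(mu, D @ mu)            # noise-free inputs reproduce mu
```

Since (I + αDᵀD)μ = μ + αDᵀ(Dμ) term by term, the fixed-point check x = μ confirms the estimator is unbiased for any α; the paper's contribution is characterizing and optimizing over the larger family of linear combinations that share this property.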
Citations: 0
NeuVAS: Neural Implicit Surfaces for Variational Shape Modeling
IF 6.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763331
Pengfei Wang, Qiujie Dong, Fangtian Liang, Hao Pan, Lei Yang, Congyi Zhang, Guying Lin, Caiming Zhang, Yuanfeng Zhou, Changhe Tu, Shiqing Xin, Alla Sheffer, Xin Li, Wenping Wang
Neural implicit shape representation has drawn significant attention in recent years due to its smoothness, differentiability, and topological flexibility. However, directly modeling the shape of a neural implicit surface, especially as the zero-level set of a neural signed distance function (SDF), with sparse geometric control is still a challenging task. Sparse input shape control typically includes 3D curve networks or, more generally, 3D curve sketches, which are unstructured and cannot be connected to form a curve network, and therefore more difficult to deal with. While 3D curve networks or curve sketches provide intuitive shape control, their sparsity and varied topology pose challenges in generating high-quality surfaces to meet such curve constraints. In this paper, we propose NeuVAS, a variational approach to shape modeling using neural implicit surfaces constrained under sparse input shape control, including unstructured 3D curve sketches as well as connected 3D curve networks. Specifically, we introduce a smoothness term based on a functional of surface curvatures to minimize shape variation of the zero-level set surface of a neural SDF. We also develop a new technique to faithfully model G⁰ sharp feature curves as specified in the input curve sketches. Comprehensive comparisons with the state-of-the-art methods demonstrate the significant advantages of our method.
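The kind of curvature quantity such a smoothness functional integrates can be sketched numerically for a grid-sampled SDF via H = ½ · div(∇φ/|∇φ|). This finite-difference sketch is illustrative only (the grid spacing and sphere test are assumptions) and is not NeuVAS's neural formulation:

```python
import numpy as np

# Mean curvature of the implicit surface of a signed distance field phi
# sampled on a regular grid: H = 0.5 * div(grad(phi) / |grad(phi)|),
# computed with second-order central differences via np.gradient.
def mean_curvature(phi, h=1.0):
    gx, gy, gz = np.gradient(phi, h)
    norm = np.sqrt(gx**2 + gy**2 + gz**2) + 1e-12  # avoid division by zero
    nx, ny, nz = gx / norm, gy / norm, gz / norm   # unit normal field
    div = (np.gradient(nx, h, axis=0)
           + np.gradient(ny, h, axis=1)
           + np.gradient(nz, h, axis=2))
    return 0.5 * div
```

For a sphere SDF of radius R, this recovers H ≈ 1/r at distance r from the center, matching the analytic mean curvature of the level sets.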
Citations: 0
AnySplat: Feed-forward 3D Gaussian Splatting from Unconstrained Views
IF 6.2 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-12-04 | DOI: 10.1145/3763326
Lihan Jiang, Yucheng Mao, Linning Xu, Tao Lu, Kerui Ren, Yichen Jin, Xudong Xu, Mulin Yu, Jiangmiao Pang, Feng Zhao, Dahua Lin, Bo Dai
We introduce AnySplat, a feed-forward network for novel-view synthesis from uncalibrated image collections. In contrast to traditional neural-rendering pipelines that demand known camera poses and per-scene optimization, or recent feed-forward methods that buckle under the computational weight of dense views, our model predicts everything in one shot. A single forward pass yields a set of 3D Gaussian primitives encoding both scene geometry and appearance, and the corresponding camera intrinsics and extrinsics for each input image. This unified design scales effortlessly to casually captured, multi-view datasets without any pose annotations. In extensive zero-shot evaluations, AnySplat matches the quality of pose-aware baselines in both sparse- and dense-view scenarios while surpassing existing pose-free approaches. Moreover, it greatly reduces rendering latency compared to optimization-based neural fields, bringing real-time novel-view synthesis within reach for unconstrained capture settings. Project page: https://city-super.github.io/anysplat/.
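For context, 3D Gaussian primitives of the kind such a feed-forward model predicts are conventionally parameterized by a position, a unit quaternion, per-axis scales, an opacity, and spherical-harmonic color, with covariance Σ = R S Sᵀ Rᵀ. The sketch below shows only that standard covariance construction; it makes no claim about AnySplat's actual prediction head:

```python
import numpy as np

# Standard 3D Gaussian splatting covariance: Sigma = R S S^T R^T, with R
# derived from a (w, x, y, z) quaternion and S a diagonal scale matrix.
# This guarantees a symmetric positive semi-definite covariance for any
# predicted quaternion/scale pair.
def quat_to_rot(q):
    """Rotation matrix from a quaternion (normalized internally)."""
    q = np.asarray(q, dtype=float)
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def covariance(quat, scales):
    """Anisotropic 3D Gaussian covariance from rotation and axis scales."""
    R = quat_to_rot(quat)
    S = np.diag(np.asarray(scales, dtype=float))
    return R @ S @ S.T @ R.T
```

With the identity quaternion, the covariance is simply the diagonal of squared scales; a rotated quaternion reorients the same ellipsoid without changing its eigenvalues.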
Citations: 0
Journal: ACM Transactions on Graphics