
Computers & Graphics-Uk: Latest Publications

Retinal pre-filtering for light field displays
IF 2.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-06 | DOI: 10.1016/j.cag.2024.104033

The display coefficients that produce the signal emitted by a light field display are usually calculated to approximate the radiance over a set of sampled rays in the light field space. However, not all information contained in the light field signal is of equal importance to an observer. We propose a retinal pre-filtering of the light field samples that takes into account the image formation process of the observer to determine display coefficients that will ultimately produce better retinal images for a range of focus distances. We demonstrate a significant increase in image definition without changing the display resolution.
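To make the idea concrete, the sketch below contrasts a conventional least-squares fit of display coefficients against one performed through a retinal image-formation operator. The matrices `T` and `R`, their sizes, and the random data are purely illustrative assumptions, not the paper's actual formulation.

```python
# A minimal, hypothetical sketch of the idea behind retinal pre-filtering:
# instead of fitting display coefficients to the sampled light-field radiance
# directly, fit them to the retinal image those samples would form.
# All matrices and shapes here are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_rays, n_coeffs, n_retina = 512, 128, 64

# T maps display coefficients to radiance along the sampled rays.
T = rng.normal(size=(n_rays, n_coeffs))
# L is the target light-field radiance on those rays.
L = rng.normal(size=n_rays)

# Conventional fit: approximate the radiance of every sampled ray equally.
c_conventional, *_ = np.linalg.lstsq(T, L, rcond=None)

# Retinal pre-filtering (as sketched here): R models the observer's image
# formation for one focus distance, pooling rays onto retinal pixels.
# A range of focus distances could be handled by stacking several R matrices.
R = rng.normal(size=(n_retina, n_rays))
c_retinal, *_ = np.linalg.lstsq(R @ T, R @ L, rcond=None)

# The second solve spends the display's limited degrees of freedom on the
# error that actually reaches the retina, not on imperceptible ray error.
print(np.linalg.norm(R @ (T @ c_conventional - L)),
      np.linalg.norm(R @ (T @ c_retinal - L)))
```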

Citations: 0
SP-SeaNeRF: Underwater Neural Radiance Fields with strong scattering perception
IF 2.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-06 | DOI: 10.1016/j.cag.2024.104025

Water and light interactions cause color shifts and blurring in underwater images, while dynamic underwater illumination further disrupts scene consistency, resulting in poor performance of optical image-based reconstruction methods underwater. Although Neural Radiance Fields (NeRF) can describe an aqueous medium through volume rendering, applying them directly underwater may induce artifacts and floaters. We propose SP-SeaNeRF, which uses a micro MLP to predict water column parameters and simulates the degradation process as a combination of real colors and scattered colors in underwater images, enhancing the model’s perception of scattering. We use illumination embedding vectors to learn the illumination bias within the images, in order to prevent dynamic illumination from disrupting scene consistency. We also introduce a novel sampling module that focuses on maximum-weight points, effectively improving training and inference speed. We evaluated our proposed method on the SeaThru-NeRF and Neuralsea underwater datasets. The experimental results show that our method exhibits superior underwater color restoration ability, outperforming existing underwater NeRF methods in terms of reconstruction quality and speed.
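The degradation model the abstract refers to can be pictured with a SeaThru-style composition of an attenuated scene color and a distance-dependent backscatter term. The sketch below assumes that standard form; all parameter names and values are illustrative rather than taken from SP-SeaNeRF.

```python
# A minimal sketch of the kind of degradation model the abstract describes:
# an observed underwater color is modeled as attenuated "real" color plus a
# backscatter (scattered) component that grows with distance. The parameter
# names and the SeaThru-style form are assumptions, not the paper's exact model.
import torch

def underwater_composite(real_rgb, depth, beta_d, beta_b, b_inf):
    """real_rgb: (N,3) scene color; depth: (N,1) range along the ray;
    beta_d, beta_b: (3,) per-channel attenuation/backscatter coefficients
    (e.g. predicted by a small MLP, as the abstract's water column parameters);
    b_inf: (3,) veiling light color."""
    direct = real_rgb * torch.exp(-beta_d * depth)            # color shift / dimming
    backscatter = b_inf * (1.0 - torch.exp(-beta_b * depth))  # haze that grows with range
    return direct + backscatter

# Toy usage: farther points lose their own color and pick up the water's color.
rgb = torch.tensor([[0.8, 0.2, 0.1]]).repeat(3, 1)
z = torch.tensor([[0.5], [2.0], [8.0]])
print(underwater_composite(rgb, z,
                           beta_d=torch.tensor([0.6, 0.3, 0.2]),
                           beta_b=torch.tensor([0.5, 0.4, 0.3]),
                           b_inf=torch.tensor([0.1, 0.3, 0.4])))
```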

Citations: 0
Surveying the evolution of virtual humans expressiveness toward real humans
IF 2.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-06 | DOI: 10.1016/j.cag.2024.104034

Virtual Humans (VHs) emerged over 50 years ago and have since experienced notable advancements. Initially, developing and animating VHs posed significant challenges. However, modern technology, both commercially available and freely accessible, has democratized the creation and animation processes, making them more accessible to users, programmers, and designers. These advancements have led to the replication of authentic traits and behaviors of real actors in VHs, resulting in visually convincing and behaviorally lifelike characters. As a consequence, many research areas have arisen around functional VH technologies. This paper explores the evolution of four such areas and emerging trends related to VHs, while examining some of the implications and challenges posed by highly realistic characters within these domains.

Citations: 0
Investigating the relationships between user behaviors and tracking factors on task performance and trust in augmented reality
IF 2.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-06 | DOI: 10.1016/j.cag.2024.104035

This research paper explores the impact of augmented reality (AR) tracking characteristics, specifically an AR head-worn display’s tracking registration accuracy and precision, on users’ spatial abilities and subjective perceptions of trust in and reliance on the technology. Our study aims to clarify the relationships between user performance and the different behaviors users may employ based on varying degrees of trust in and reliance on AR. Our controlled experimental setup used a 360° field-of-regard search-and-selection task and combined the immersive aspects of a CAVE-like environment with AR overlays viewed through a head-worn display.

We investigated three levels of simulated AR tracking errors in terms of both accuracy and precision (+0°, +1°, +2°). We controlled for four user task behaviors that correspond to different levels of trust in and reliance on an AR system: AR-Only (relying only on AR), AR-First (prioritizing AR over the real world), Real-Only (relying only on the real world), and Real-First (prioritizing the real world over AR). Controlling for these behaviors, we found that even small AR tracking errors had noticeable effects on users’ task performance, especially when users relied completely on the AR cues (AR-Only). Our results link AR tracking characteristics with user behavior, highlighting the importance of understanding these elements to improve AR technology and user satisfaction.
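The distinction between accuracy and precision errors can be illustrated by perturbing an overlay's angular registration with a constant bias (accuracy) plus frame-to-frame jitter (precision). The sketch below is a hypothetical illustration of that distinction, not the study's actual simulation code; only the +0°/+1°/+2° levels come from the abstract.

```python
# A hypothetical sketch of how "accuracy" and "precision" errors differ when
# perturbing an AR overlay's registration: accuracy is a constant angular bias,
# precision is frame-to-frame angular jitter. Everything else is illustrative.
import numpy as np

def perturbed_overlay_angles(true_yaw_deg, true_pitch_deg,
                             accuracy_err_deg, precision_err_deg,
                             n_frames, rng):
    # A constant bias in a random (but fixed) direction models accuracy error.
    bias_dir = rng.uniform(0.0, 2.0 * np.pi)
    yaw_bias = accuracy_err_deg * np.cos(bias_dir)
    pitch_bias = accuracy_err_deg * np.sin(bias_dir)
    # Zero-mean jitter with std equal to the precision error models precision.
    yaw_jitter = rng.normal(0.0, precision_err_deg, n_frames)
    pitch_jitter = rng.normal(0.0, precision_err_deg, n_frames)
    return (true_yaw_deg + yaw_bias + yaw_jitter,
            true_pitch_deg + pitch_bias + pitch_jitter)

rng = np.random.default_rng(42)
for err in (0.0, 1.0, 2.0):   # the study's three simulated error levels
    yaw, pitch = perturbed_overlay_angles(30.0, 5.0, err, err, 100, rng)
    print(err, yaw.mean().round(2), yaw.std().round(2))
```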

Citations: 0
Visually communicating pathological changes: A case study on the effectiveness of Phong versus outline shading
IF 2.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-05 | DOI: 10.1016/j.cag.2024.104023

In this paper, we investigate the suitability of different visual representations of pathological growth and shrinkage using surface models of intracranial aneurysms and liver tumors. Presenting complex medical information in a visually accessible manner helps audiences understand the progression of pathological structures. Previous work in medical visualization provides an extensive design space for visualizing medical image data. However, which visualization techniques are appropriate for a general audience has not been thoroughly investigated.

We conducted a user study (n = 40) to evaluate different visual representations in terms of their suitability for solving tasks and their aesthetics. We created surface models representing the evolution of pathological structures over multiple discrete time steps and visualized them using illumination-based and illustrative techniques. Our results indicate that users’ aesthetic preferences largely coincide with their preferred visualization technique for task-solving purposes. In general, the illumination-based technique was preferred over the illustrative technique, but the latter offers great potential for increasing the accessibility of visualizations to users with color vision deficiencies.
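For readers unfamiliar with the two techniques, the sketch below recalls the textbook Phong reflection model behind illumination-based rendering and a simple view-dependent silhouette test of the kind used for outline shading. The coefficients and threshold are illustrative assumptions, not the renderer used in the study.

```python
# A brief reminder of what the "illumination-based" (Phong) technique computes
# per surface point, versus outline shading, which depends mainly on N·V.
# This is the textbook Phong reflection model, not the paper's exact renderer.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_intensity(n, l, v, ka=0.1, kd=0.7, ks=0.2, shininess=32.0):
    """n: surface normal, l: direction to light, v: direction to viewer."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    diffuse = max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l                  # mirror reflection of the light direction
    specular = max(np.dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular

def outline_weight(n, v, threshold=0.3):
    """Silhouette-style outline: dark where the normal is nearly orthogonal to the view."""
    return 1.0 if abs(np.dot(normalize(n), normalize(v))) < threshold else 0.0

print(phong_intensity(np.array([0.0, 0.0, 1.0]),
                      np.array([0.5, 0.5, 1.0]),
                      np.array([0.0, 0.0, 1.0])))
print(outline_weight(np.array([1.0, 0.0, 0.2]), np.array([0.0, 0.0, 1.0])))
```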

Citations: 0
Editorial Note Computers & Graphics Issue 122
IF 2.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-01 | DOI: 10.1016/j.cag.2024.104032
{"title":"Editorial Note Computers & Graphics Issue 122","authors":"","doi":"10.1016/j.cag.2024.104032","DOIUrl":"10.1016/j.cag.2024.104032","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142044340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Foreword to the special section on 3D object retrieval 2023 symposium (3DOR2023)
IF 2.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-01 | DOI: 10.1016/j.cag.2023.12.007
{"title":"Foreword to the special section on 3D object retrieval 2023 symposium (3DOR2023)","authors":"","doi":"10.1016/j.cag.2023.12.007","DOIUrl":"10.1016/j.cag.2023.12.007","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138683131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Foreword to the special section on SIBGRAPI 2023
IF 2.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-08-01 | DOI: 10.1016/j.cag.2023.08.031
{"title":"Foreword to the special section on SIBGRAPI 2023","authors":"","doi":"10.1016/j.cag.2023.08.031","DOIUrl":"10.1016/j.cag.2023.08.031","url":null,"abstract":"","PeriodicalId":50628,"journal":{"name":"Computers & Graphics-Uk","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129764432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From aerial LiDAR point clouds to multiscale urban representation levels by a parametric resampling
IF 2.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-07-31 | DOI: 10.1016/j.cag.2024.104022

Urban simulations involving disaster prevention, urban design, and assisted navigation rely heavily on urban geometric models. While terrestrial acquisition of large urban areas is very time-consuming, government organizations have already conducted massive aerial LiDAR surveys, some even at the national level. This work provides a pipeline for extracting multi-scale point clouds from 2D building footprints and airborne LiDAR data, handling points differently depending on whether they represent buildings, vegetation, or ground. We denoise the roof slopes, match the vegetation, and roughly recreate the building façades that are frequently hidden from aerial acquisition, using a parametric representation of geometric primitives. Because we annotate the new version of the original point cloud with the parametric equations representing each part, we can then sample the urban geometry at multiple scales until a 3D urban representation is achieved. We mainly tested our methodology in a real-world setting – the city of Genoa – which includes historical buildings and is heavily characterized by irregular ground slopes. Moreover, we present urban reconstruction results for parts of two other cities: Matera, which has a complex morphology like Genoa, and Rotterdam.
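One way to picture the class-dependent, parametric resampling step is sketched below: points labeled as a building roof get a fitted parametric primitive (here a single least-squares plane, an assumption) that can then be resampled at any target density. Class handling, primitives, and spacings are illustrative, not the paper's exact pipeline.

```python
# A coarse, hypothetical sketch of class-dependent parametric resampling:
# roof points are replaced by a fitted plane that can be resampled at any
# spacing, while ground/vegetation points could follow other rules.
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through (N,3) points."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def resample_plane(coeffs, xy_min, xy_max, spacing):
    """Regular resampling of the fitted roof plane at a chosen point spacing."""
    xs = np.arange(xy_min[0], xy_max[0], spacing)
    ys = np.arange(xy_min[1], xy_max[1], spacing)
    gx, gy = np.meshgrid(xs, ys)
    gz = coeffs[0] * gx + coeffs[1] * gy + coeffs[2]
    return np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=1)

# Toy usage: noisy points from a sloped roof patch, refit and resampled on a
# regular 0.5-unit grid (a coarser spacing would give a lower level of detail).
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(500, 2))
z = 0.3 * xy[:, 0] + 0.1 * xy[:, 1] + 5.0 + rng.normal(0, 0.05, 500)
roof = np.c_[xy, z]
plane = fit_plane(roof)
dense_roof = resample_plane(plane, (0, 0), (10, 10), spacing=0.5)
print(plane.round(2), dense_roof.shape)
```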

Citations: 0
Binary segmentation of relief patterns on point clouds
IF 2.5 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2024-07-31 | DOI: 10.1016/j.cag.2024.104020

Analysis of 3D textures, also known as relief patterns, is a challenging task that requires separating repetitive surface patterns from the underlying global geometry. Existing works classify entire surfaces based on one or a few patterns by extracting ad-hoc statistical properties. Unfortunately, these methods are not suitable for objects with multiple geometric textures and perform poorly on more complex shapes. In this paper, we propose a neural network for binary segmentation that infers per-point labels based on the presence of surface relief patterns. We evaluated the proposed architecture on a high-resolution point cloud dataset, surpassing the state of the art while maintaining memory and computation efficiency.
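The per-point binary segmentation task itself can be sketched with a small shared MLP that scores every point as relief pattern versus smooth surface. The architecture below is a hypothetical stand-in for the shape of the task, not the network proposed in the paper.

```python
# A minimal, hypothetical sketch of per-point binary segmentation: a small
# shared MLP scores every point as "relief pattern" vs "smooth surface".
import torch
import torch.nn as nn

class PerPointBinarySeg(nn.Module):
    def __init__(self, in_dim=6, hidden=64):
        super().__init__()
        # Shared point-wise MLP over, e.g., xyz + normal features.
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points):               # points: (B, N, in_dim)
        return self.mlp(points).squeeze(-1)  # logits: (B, N)

model = PerPointBinarySeg()
points = torch.randn(2, 1024, 6)                 # two toy point clouds
labels = torch.randint(0, 2, (2, 1024)).float()  # 1 = relief pattern
loss = nn.BCEWithLogitsLoss()(model(points), labels)
loss.backward()
print(loss.item())
```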

Citations: 0