
Latest publications in Computer Graphics World

GPU accelerated scalable parallel coordinates plots
Q4 Computer Science · Pub Date: 2022-11-01 · DOI: 10.2139/ssrn.4188415
Josef Stumpfegger, Kevin Höhlein, George E. Craig, R. Westermann
{"title":"GPU accelerated scalable parallel coordinates plots","authors":"Josef Stumpfegger, Kevin Höhlein, George E. Craig, R. Westermann","doi":"10.2139/ssrn.4188415","DOIUrl":"https://doi.org/10.2139/ssrn.4188415","url":null,"abstract":"","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"115 1","pages":"111-120"},"PeriodicalIF":0.0,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79089387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Developable mesh segmentation by detecting curve-like features on Gauss images
Q4 Computer Science · Pub Date: 2022-10-01 · DOI: 10.2139/ssrn.4126876
Zheng Zeng, Xiaohong Jia, L. Shen, Pengbo Bo
{"title":"Developable mesh segmentation by detecting curve-like features on Gauss images","authors":"Zheng Zeng, Xiaohong Jia, L. Shen, Pengbo Bo","doi":"10.2139/ssrn.4126876","DOIUrl":"https://doi.org/10.2139/ssrn.4126876","url":null,"abstract":"","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"16 1","pages":"42-54"},"PeriodicalIF":0.0,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82024809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Structure simplification of planar quadrilateral meshes
Q4 Computer Science · Pub Date: 2022-10-01 · DOI: 10.2139/ssrn.4167992
Muhammad Naeem Akram, Kaoji Xu, Guoning Chen
In this paper, we present a structure simplification framework for planar all-quad meshes with open boundaries. Our framework can handle quad meshes with complex structures (e.g., quad meshes obtained via Catmull-Clark subdivision of triangle meshes) and produces simpler meshes while preserving the boundary features. To achieve that, we introduce a set of separatrix-based semi-global operations and combine them with existing local operations to develop a new simplification framework. Additionally, we organize the individual simplification operations into groups and employ per-group ranking strategies to order them, which yields quad meshes with better quality and simpler structure. We provide a comprehensive evaluation of our framework using different input parameters on a number of representative planar quad meshes with various boundary configurations. To demonstrate the advantages of our method, we compare it with several existing frameworks. Our comparison shows that our simplification framework usually produces simpler structures with faster computation than the state-of-the-art methods.
Citations: 1
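The abstract above describes a ranked, greedy application of grouped simplification operations. The sketch below only illustrates that control flow and is not the authors' implementation: the `RankedOp` representation, the scoring heuristic, and the `apply_op` / `count_irregular` callbacks are hypothetical placeholders.

```python
# Illustrative sketch only (not the paper's code): candidate simplification
# operations are scored, kept in a priority queue, and applied best-first
# until the mesh structure is simple enough.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class RankedOp:
    score: float                       # lower score = preferred (e.g., smaller quality loss)
    name: str = field(compare=False)   # e.g., "separatrix_collapse", "local_diag_collapse"

def simplify(initial_ops, target_irregular_count, apply_op, count_irregular):
    """Greedy best-first loop over ranked simplification operations."""
    heap = list(initial_ops)
    heapq.heapify(heap)
    while heap and count_irregular() > target_irregular_count:
        op = heapq.heappop(heap)
        for new_op in apply_op(op):    # applying one op may enable new candidate ops
            heapq.heappush(heap, new_op)
```

A priority queue is one natural way to keep such a ranking cheap to maintain as operations enable or invalidate one another; the paper's actual grouping and ordering may differ.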
Understanding reinforcement learned crowds
Q4 Computer Science · Pub Date: 2022-09-19 · DOI: 10.48550/arXiv.2209.09344
Ariel Kwiatkowski, Vicky S. Kalogeiton, Julien Pettré, Marie-Paule Cani
Simulating trajectories of virtual crowds is a commonly encountered task in Computer Graphics. Several recent works have applied Reinforcement Learning methods to animate virtual agents; however, they often make different design choices when it comes to the fundamental simulation setup. Each of these choices comes with a reasonable justification for its use, so it is not obvious what their real impact is and how they affect the results. In this work, we analyze some of these arbitrary choices in terms of their impact on the learning performance, as well as the quality of the resulting simulation measured in terms of energy efficiency. We perform a theoretical analysis of the properties of the reward function design, and empirically evaluate the impact of using certain observation and action spaces on a variety of scenarios, with the reward function and energy usage as metrics. We show that directly using the neighboring agents' information as the observation generally outperforms the more widely used raycasting. Similarly, using nonholonomic controls with egocentric observations tends to produce more efficient behaviors than holonomic controls with absolute observations. Each of these choices has a significant, and potentially nontrivial, impact on the results, so researchers should be mindful about choosing and reporting them in their work.
Citations: 5
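As a rough illustration of the two observation styles the abstract contrasts (direct neighbor information versus raycasting), the sketch below builds both from 2D positions and velocities. It is not taken from the paper; the array shapes, the values of k, n_rays and max_range, and the disc-shaped agent approximation are assumptions.

```python
# Hedged sketch of two crowd-agent observation styles; not the paper's code.
import numpy as np

def neighbor_observation(agent_pos, agent_vel, others_pos, others_vel, k=4):
    """Egocentric observation: relative position and velocity of the k nearest agents."""
    rel_pos = others_pos - agent_pos
    order = np.argsort(np.linalg.norm(rel_pos, axis=1))[:k]
    return np.concatenate([rel_pos[order].ravel(),
                           (others_vel[order] - agent_vel).ravel()])

def raycast_observation(agent_pos, others_pos, n_rays=16, max_range=5.0, radius=0.3):
    """Raycast observation: distance to the nearest agent hit along each ray direction."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (n_rays, 2) unit directions
    obs = np.full(n_rays, max_range)
    for p in others_pos:
        rel = p - agent_pos
        along = dirs @ rel                        # projection of the offset on each ray
        perp2 = np.sum(rel**2) - along**2         # squared distance from the ray line
        hit = (along > 0) & (perp2 < radius**2)   # ray passes through the agent's disc
        obs[hit] = np.minimum(obs[hit], along[hit])  # approximate hit distance by projection
    return obs
```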
Underwater enhancement based on a self-learning strategy and attention mechanism for high-intensity regions
Q4 Computer Science · Pub Date: 2022-08-01 · DOI: 10.48550/arXiv.2208.03319
Claudio D. Mello, Bryan U. Moreira, Paulo J. O. Evald, Paulo L. J. Drews-Jr, Silvia S. Botelho
Images acquired during underwater activities suffer from environmental properties of the water, such as turbidity and light attenuation. These phenomena cause color distortion, blurring, and contrast reduction. In addition, irregular ambient light distribution causes color channel imbalance and regions with high-intensity pixels. Recent deep-learning works on underwater image enhancement tackle the lack of paired datasets by generating synthetic ground truth. In this paper, we present a self-supervised learning methodology for underwater image enhancement, based on deep learning, that requires no paired datasets. The proposed method estimates the degradation present in underwater images. An autoencoder then reconstructs the image, and its output is degraded using the estimated degradation information. The strategy replaces the output image with this degraded version in the loss function during the training phase. This procedure misleads the neural network, which learns to compensate for the additional degradation. As a result, the reconstructed image is an enhanced version of the input image. The algorithm also includes an attention module to reduce the high-intensity areas that color channel imbalances and outlier regions produce in enhanced images. The proposed methodology requires no ground truth; only real underwater images were used to train the neural network, and the results indicate the effectiveness of the method in terms of color preservation, color cast reduction, and contrast improvement.
Citations: 5
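A minimal sketch of the training trick the abstract describes: the enhanced output is re-degraded with the estimated degradation before the loss is computed against the raw input, so no paired ground truth is needed. This is a paraphrase, not the authors' code; `autoencoder`, `estimate_degradation`, `apply_degradation`, and the choice of an L1 loss are stand-ins.

```python
# Hedged sketch of a self-supervised re-degradation loss; the real network
# architecture, degradation model, and loss are defined in the paper, not here.
import torch.nn.functional as F

def training_step(autoencoder, estimate_degradation, apply_degradation, raw_image):
    degradation = estimate_degradation(raw_image)        # e.g., turbidity / attenuation estimate
    enhanced = autoencoder(raw_image)                     # candidate enhanced image
    re_degraded = apply_degradation(enhanced, degradation)
    # The loss compares the *re-degraded* output against the raw input, never the
    # enhanced image itself -- this is the "misleading" step that removes the need
    # for paired ground truth: minimizing it pushes `enhanced` toward a clean image.
    return F.l1_loss(re_degraded, raw_image)
```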
SHREC'22 Track: Sketch-Based 3D Shape Retrieval in the Wild
Q4 Computer Science · Pub Date: 2022-07-01 · DOI: 10.48550/arXiv.2207.04945
Jie Qin, Shuaihang Yuan, Jiaxin Chen, B. Amor, Yi Fang, N. Hoang-Xuan, Chi-Bien Chu, Khoi-Nguyen Nguyen-Ngoc, Thien-Tri Cao, Nhat-Khang Ngô, Tuan-Luc Huynh, Hai-Dang Nguyen, M. Tran, H. Luo, Jianning Wang, Zheng-Wei Zhang, Zihao Xin, Yang Wang, Feng Wang, Yingjie Tang, Haiqin Chen, Yan Wang, Qunying Zhou, Ji Zhang, Hongyu Wang
Sketch-based 3D shape retrieval (SBSR) is an important yet challenging task, which has drawn increasing attention in recent years. Existing approaches address the problem in a restricted setting, without appropriately simulating real application scenarios. To mimic the realistic setting, in this track, we adopt large-scale sketches drawn by amateurs with different levels of drawing skill, as well as a variety of 3D shapes, including not only CAD models but also models scanned from real objects. We define two SBSR tasks and construct two benchmarks consisting of more than 46,000 CAD models, 1,700 realistic models, and 145,000 sketches in total. Four teams participated in this track and submitted 15 runs for the two tasks, evaluated with 7 commonly adopted metrics. We hope that the benchmarks, the comparative results, and the open-sourced evaluation code will foster future research in this direction within the 3D object retrieval community.
Citations: 6
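The track reports seven commonly adopted retrieval metrics, which the abstract does not enumerate. As a hedged illustration of the kind of metric typically used in SBSR evaluation (not the track's actual evaluation code), the snippet below computes mean Average Precision over ranked retrieval lists.

```python
# Generic ranked-retrieval metric for illustration only; the SHREC'22 track's
# official metric set and evaluation code may differ.
def average_precision(ranked_labels, query_label):
    """ranked_labels: class labels of retrieved shapes, best match first."""
    hits, precision_sum = 0, 0.0
    for rank, label in enumerate(ranked_labels, start=1):
        if label == query_label:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / hits if hits else 0.0

def mean_average_precision(all_ranked_labels, query_labels):
    """Average the per-query AP over all sketch queries."""
    aps = [average_precision(r, q) for r, q in zip(all_ranked_labels, query_labels)]
    return sum(aps) / len(aps)
```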
A study of deep single sketch-based modeling: View/style invariance, sparsity and latent space disentanglement
Q4 Computer Science · Pub Date: 2022-06-01 · DOI: 10.2139/ssrn.3999114
Yue Zhong, Yulia Gryaditskaya, Honggang Zhang, Yi-Zhe Song
{"title":"A study of deep single sketch-based modeling: View/style invariance, sparsity and latent space disentanglement","authors":"Yue Zhong, Yulia Gryaditskaya, Honggang Zhang, Yi-Zhe Song","doi":"10.2139/ssrn.3999114","DOIUrl":"https://doi.org/10.2139/ssrn.3999114","url":null,"abstract":"","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"64 1","pages":"237-247"},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85501444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
A parallel algorithm for computing Voronoi diagram of a set of spheres using restricted lower envelope approach and topology matching
Q4 Computer Science · Pub Date: 2022-06-01 · DOI: 10.2139/ssrn.4095671
M. Mukundan, S. Thayyil, Ramanathan Muthuganapathy
{"title":"A parallel algorithm for computing Voronoi diagram of a set of spheres using restricted lower envelope approach and topology matching","authors":"M. Mukundan, S. Thayyil, Ramanathan Muthuganapathy","doi":"10.2139/ssrn.4095671","DOIUrl":"https://doi.org/10.2139/ssrn.4095671","url":null,"abstract":"","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"1 1","pages":"210-221"},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72679985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
SHREC 2022: pothole and crack detection in the road pavement using images and RGB-D data
Q4 Computer Science · Pub Date: 2022-05-26 · DOI: 10.48550/arXiv.2205.13326
E. M. Thompson, A. Ranieri, S. Biasotti, Miguel Chicchón, I. Sipiran, Minh Pham, Thang-Long Nguyen-Ho, Hai-Dang Nguyen, M. Tran
This paper describes the methods submitted for evaluation to the SHREC 2022 track on pothole and crack detection in the road pavement. A total of 7 different runs for the semantic segmentation of the road surface are compared: 6 from the participants plus a baseline method. All methods exploit Deep Learning techniques, and their performance is tested using the same environment (i.e., a single Jupyter notebook). A training set, composed of 3836 semantic segmentation image/mask pairs and 797 RGB-D video clips collected with the latest depth cameras, was made available to the participants. The methods are then evaluated on the 496 image/mask pairs in the validation set, on the 504 pairs in the test set, and finally on 8 video clips. The analysis of the results is based on quantitative metrics for image segmentation and on qualitative analysis of the video clips. The participation and the results show that the scenario is of great interest and that the use of RGB-D data is still challenging in this context.
Citations: 4
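For context on the quantitative side of the evaluation, the sketch below computes two standard semantic-segmentation metrics (IoU and Dice) on binary masks. The track's exact metric set is not spelled out in the abstract, so treat this purely as an illustration.

```python
# Standard binary-mask segmentation metrics, shown for illustration; the SHREC 2022
# evaluation notebook may use a different or larger set of metrics.
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection over Union between predicted and ground-truth crack/pothole masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def dice(pred_mask, gt_mask):
    """Dice coefficient (pixel-level F1) between predicted and ground-truth masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    total = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / total if total else 1.0
```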
Simultaneous 3D dithering of multiple images by curves
Q4 Computer Science · Pub Date: 2022-05-01 · DOI: 10.2139/ssrn.4092899
G. Elber
{"title":"Simultaneous 3D dithering of multiple images by curves","authors":"G. Elber","doi":"10.2139/ssrn.4092899","DOIUrl":"https://doi.org/10.2139/ssrn.4092899","url":null,"abstract":"","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"7 1","pages":"146-152"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83742808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3