GPU accelerated scalable parallel coordinates plots
Josef Stumpfegger, Kevin Höhlein, George E. Craig, R. Westermann
Pub Date: 2022-11-01 | DOI: 10.2139/ssrn.4188415 | Computer Graphics World, pp. 111-120
Understanding reinforcement learned crowds
Ariel Kwiatkowski, Vicky S. Kalogeiton, Julien Pettré, Marie-Paule Cani
Pub Date: 2022-09-19 | DOI: 10.48550/arXiv.2209.09344 | Computer Graphics World, pp. 28-37
Simulating trajectories of virtual crowds is a commonly encountered task in Computer Graphics. Several recent works have applied Reinforcement Learning methods to animate virtual agents; however, they often make different design choices in the fundamental simulation setup. Each of these choices comes with a reasonable justification, so it is not obvious what their real impact is and how they affect the results. In this work, we analyze some of these arbitrary choices in terms of their impact on learning performance and on the quality of the resulting simulation, measured in terms of energy efficiency. We perform a theoretical analysis of the properties of the reward function design, and we empirically evaluate the impact of particular observation and action spaces on a variety of scenarios, with the reward function and energy usage as metrics. We show that directly using the neighboring agents' information as observations generally outperforms the more widely used raycasting. Similarly, nonholonomic controls with egocentric observations tend to produce more efficient behaviors than holonomic controls with absolute observations. Each of these choices has a significant, and potentially nontrivial, impact on the results, so researchers should be mindful about choosing and reporting them in their work.
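The distinction between the two control parameterizations is easiest to see in code. Below is a minimal sketch, assuming a 2D agent and a unicycle-style nonholonomic model; this is illustrative only, not the authors' implementation, and the state layout and time step are assumptions:

    import numpy as np

    def step_holonomic(pos, action, dt=0.1):
        # Holonomic control: the action is a velocity vector expressed in
        # absolute (world-frame) coordinates and applied directly.
        return pos + dt * np.asarray(action)

    def step_nonholonomic(pos, heading, action, dt=0.1):
        # Nonholonomic control: the action is (forward speed, turning rate)
        # in the agent's own egocentric frame, as in a unicycle model.
        speed, omega = action
        heading = heading + dt * omega
        pos = pos + dt * speed * np.array([np.cos(heading), np.sin(heading)])
        return pos, heading

Under the nonholonomic parameterization the agent can only move along its current heading, which is one plausible reason such controls tend toward smoother, more energy-efficient trajectories.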
{"title":"Understanding reinforcement learned crowds","authors":"Ariel Kwiatkowski, Vicky S. Kalogeiton, Julien Pettr'e, Marie-Paule Cani","doi":"10.48550/arXiv.2209.09344","DOIUrl":"https://doi.org/10.48550/arXiv.2209.09344","url":null,"abstract":"Simulating trajectories of virtual crowds is a commonly encountered task in Computer Graphics. Several recent works have applied Reinforcement Learning methods to animate virtual agents, however they often make different design choices when it comes to the fundamental simulation setup. Each of these choices comes with a reasonable justification for its use, so it is not obvious what is their real impact, and how they affect the results. In this work, we analyze some of these arbitrary choices in terms of their impact on the learning performance, as well as the quality of the resulting simulation measured in terms of the energy efficiency. We perform a theoretical analysis of the properties of the reward function design, and empirically evaluate the impact of using certain observation and action spaces on a variety of scenarios, with the reward function and energy usage as metrics. We show that directly using the neighboring agents' information as observation generally outperforms the more widely used raycasting. Similarly, using nonholonomic controls with egocentric observations tends to produce more efficient behaviors than holonomic controls with absolute observations. Each of these choices has a significant, and potentially nontrivial impact on the results, and so researchers should be mindful about choosing and reporting them in their work.","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"37 1","pages":"28-37"},"PeriodicalIF":0.0,"publicationDate":"2022-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90233882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Underwater enhancement based on a self-learning strategy and attention mechanism for high-intensity regions
Claudio D. Mello, Bryan U. Moreira, Paulo J. O. Evald, Paulo L. J. Drews-Jr, Silvia S. Botelho
Pub Date: 2022-08-01 | DOI: 10.48550/arXiv.2208.03319 | Computer Graphics World, pp. 264-276
Images acquired during underwater activities suffer from environmental properties of the water, such as turbidity and light attenuation. These phenomena cause color distortion, blurring, and contrast reduction. In addition, irregular ambient light distribution causes color channel unbalance and regions with high-intensity pixels. Recent deep learning works on underwater image enhancement tackle the lack of paired datasets by generating synthetic ground truth. In this paper, we present a self-supervised deep learning methodology for underwater image enhancement that requires no paired datasets. The proposed method estimates the degradation present in underwater images. An autoencoder reconstructs the input image, and its output is then degraded using the estimated degradation information. The strategy replaces the output image with this degraded version in the loss function during the training phase. This procedure misleads the neural network, which learns to compensate for the additional degradation. As a result, the reconstructed image is an enhanced version of the input image. The algorithm also includes an attention module to reduce high-intensity areas generated in enhanced images by color channel unbalance and outlier regions. The methodology requires no ground truth: only real underwater images were used to train the neural network, and the results indicate the effectiveness of the method in terms of color preservation, color cast reduction, and contrast improvement.
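The core of the strategy is the substitution performed in the loss: the autoencoder's output is re-degraded with the estimated degradation before being compared against the raw input. A minimal PyTorch-style sketch follows; the estimate_degradation and degrade callables are hypothetical stand-ins for the paper's degradation estimator and degradation model, and the use of MSE is an assumption:

    import torch.nn.functional as F

    def self_supervised_loss(autoencoder, estimate_degradation, degrade, x):
        # x: a batch of real (already degraded) underwater images.
        d = estimate_degradation(x)    # estimated per-image degradation
        y = autoencoder(x)             # reconstruction of the input
        y_degraded = degrade(y, d)     # re-apply the estimated degradation
        # Comparing the re-degraded output with the raw input misleads the
        # network: for y_degraded to match x, y must be an enhanced image.
        return F.mse_loss(y_degraded, x)

The point of the sketch is only that the loss is computed on the re-degraded output rather than on the reconstruction itself, which is why no clean ground truth is ever needed.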
{"title":"Underwater enhancement based on a self-learning strategy and attention mechanism for high-intensity regions","authors":"Claudio D. Mello, Bryan U. Moreira, Paulo J. O. Evald, Paulo L. J. Drews-Jr, Silvia S. Botelho","doi":"10.48550/arXiv.2208.03319","DOIUrl":"https://doi.org/10.48550/arXiv.2208.03319","url":null,"abstract":"Images acquired during underwater activities suffer from environmental properties of the water, such as turbidity and light attenuation. These phenomena cause color distortion, blurring, and contrast reduction. In addition, irregular ambient light distribution causes color channel unbalance and regions with high-intensity pixels. Recent works related to underwater image enhancement, and based on deep learning approaches, tackle the lack of paired datasets generating synthetic ground-truth. In this paper, we present a self-supervised learning methodology for underwater image enhancement based on deep learning that requires no paired datasets. The proposed method estimates the degradation present in underwater images. Besides, an autoencoder reconstructs this image, and its output image is degraded using the estimated degradation information. Therefore, the strategy replaces the output image with the degraded version in the loss function during the training phase. This procedure textit{misleads} the neural network that learns to compensate the additional degradation. As a result, the reconstructed image is an enhanced version of the input image. Also, the algorithm presents an attention module to reduce high-intensity areas generated in enhanced images by color channel unbalances and outlier regions. Furthermore, the proposed methodology requires no ground-truth. Besides, only real underwater images were used to train the neural network, and the results indicate the effectiveness of the method in terms of color preservation, color cast reduction, and contrast improvement.","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"56 1","pages":"264-276"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83928855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SHREC'22 Track: Sketch-Based 3D Shape Retrieval in the Wild
Jie Qin, Shuaihang Yuan, Jiaxin Chen, B. Amor, Yi Fang, N. Hoang-Xuan, Chi-Bien Chu, Khoi-Nguyen Nguyen-Ngoc, Thien-Tri Cao, Nhat-Khang Ngô, Tuan-Luc Huynh, Hai-Dang Nguyen, M. Tran, H. Luo, Jianning Wang, Zheng-Wei Zhang, Zihao Xin, Yang Wang, Feng Wang, Yingjie Tang, Haiqin Chen, Yan Wang, Qunying Zhou, Ji Zhang, Hongyu Wang
Pub Date: 2022-07-01 | DOI: 10.48550/arXiv.2207.04945 | Computer Graphics World, pp. 104-115
Sketch-based 3D shape retrieval (SBSR) is an important yet challenging task that has drawn increasing attention in recent years. Existing approaches address the problem in a restricted setting, without appropriately simulating real application scenarios. To mimic the realistic setting, in this track we adopt large-scale sketches drawn by amateurs with different levels of drawing skill, as well as a variety of 3D shapes including not only CAD models but also models scanned from real objects. We define two SBSR tasks and construct two benchmarks consisting of more than 46,000 CAD models, 1,700 realistic models, and 145,000 sketches in total. Four teams participated in this track and submitted 15 runs for the two tasks, evaluated by 7 commonly adopted metrics. We hope that the benchmarks, the comparative results, and the open-sourced evaluation code will foster future research in this direction within the 3D object retrieval community.
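As a concrete example of the kind of metric involved, mean average precision (mAP) is commonly adopted in retrieval benchmarks of this sort. A generic NumPy sketch is given below; the track's actual open-sourced evaluation code may compute its 7 metrics differently:

    import numpy as np

    def mean_average_precision(dist, query_labels, gallery_labels):
        # dist[i, j]: distance from query sketch i to gallery shape j.
        aps = []
        for i in range(dist.shape[0]):
            order = np.argsort(dist[i])                     # ranked gallery
            rel = gallery_labels[order] == query_labels[i]  # relevance flags
            if not rel.any():
                continue
            hits = np.cumsum(rel)                           # relevant so far
            ranks = np.flatnonzero(rel) + 1                 # 1-based ranks
            aps.append((hits[rel] / ranks).mean())          # AP for query i
        return float(np.mean(aps))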
{"title":"SHREC'22 Track: Sketch-Based 3D Shape Retrieval in the Wild","authors":"Jie Qin, Shuaihang Yuan, Jiaxin Chen, B. Amor, Yi Fang, N. Hoang-Xuan, Chi-Bien Chu, Khoi-Nguyen Nguyen-Ngoc, Thien-Tri Cao, Nhat-Khang Ngô, Tuan-Luc Huynh, Hai-Dang Nguyen, M. Tran, H. Luo, Jianning Wang, Zheng-Wei Zhang, Zihao Xin, Yang Wang, Feng Wang, Yingjie Tang, Haiqin Chen, Yan Wang, Qunying Zhou, Ji Zhang, Hongyu Wang","doi":"10.48550/arXiv.2207.04945","DOIUrl":"https://doi.org/10.48550/arXiv.2207.04945","url":null,"abstract":"Sketch-based 3D shape retrieval (SBSR) is an important yet challenging task, which has drawn more and more attention in recent years. Existing approaches address the problem in a restricted setting, without appropriately simulating real application scenarios. To mimic the realistic setting, in this track, we adopt large-scale sketches drawn by amateurs of different levels of drawing skills, as well as a variety of 3D shapes including not only CAD models but also models scanned from real objects. We define two SBSR tasks and construct two benchmarks consisting of more than 46,000 CAD models, 1,700 realistic models, and 145,000 sketches in total. Four teams participated in this track and submitted 15 runs for the two tasks, evaluated by 7 commonly-adopted metrics. We hope that, the benchmarks, the comparative results, and the open-sourced evaluation code will foster future research in this direction among the 3D object retrieval community.","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"51 1","pages":"104-115"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75572242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A study of deep single sketch-based modeling: View/style invariance, sparsity and latent space disentanglement
Yue Zhong, Yulia Gryaditskaya, Honggang Zhang, Yi-Zhe Song
Pub Date: 2022-06-01 | DOI: 10.2139/ssrn.3999114 | Computer Graphics World, pp. 237-247
A parallel algorithm for computing Voronoi diagram of a set of spheres using restricted lower envelope approach and topology matching
M. Mukundan, S. Thayyil, Ramanathan Muthuganapathy
Pub Date: 2022-06-01 | DOI: 10.2139/ssrn.4095671 | Computer Graphics World, pp. 210-221
SHREC 2022: pothole and crack detection in the road pavement using images and RGB-D data
E. M. Thompson, A. Ranieri, S. Biasotti, Miguel Chicchón, I. Sipiran, Minh Pham, Thang-Long Nguyen-Ho, Hai-Dang Nguyen, M. Tran
Pub Date: 2022-05-26 | DOI: 10.48550/arXiv.2205.13326 | Computer Graphics World, pp. 161-171
This paper describes the methods submitted for evaluation to the SHREC 2022 track on pothole and crack detection in road pavement. A total of 7 runs for semantic segmentation of the road surface are compared: 6 from the participants plus a baseline method. All methods exploit deep learning techniques, and their performance is tested in the same environment (i.e., a single Jupyter notebook). A training set, composed of 3,836 semantic segmentation image/mask pairs and 797 RGB-D video clips collected with the latest depth cameras, was made available to the participants. The methods are then evaluated on the 496 image/mask pairs in the validation set, on the 504 pairs in the test set, and finally on 8 video clips. The analysis of the results is based on quantitative metrics for image segmentation and on a qualitative analysis of the video clips. The participation and the results show that the scenario is of great interest and that the use of RGB-D data is still challenging in this context.
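Among the quantitative metrics for image segmentation, per-class intersection over union (IoU) is a standard choice. The sketch below is illustrative only, assuming integer label masks; the track's actual evaluation notebook may use a different set of measures:

    import numpy as np

    def iou_per_class(pred, gt, num_classes):
        # pred, gt: integer label masks of identical shape.
        ious = []
        for c in range(num_classes):
            inter = np.logical_and(pred == c, gt == c).sum()
            union = np.logical_or(pred == c, gt == c).sum()
            ious.append(inter / union if union > 0 else float('nan'))
        return ious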
{"title":"SHREC 2022: pothole and crack detection in the road pavement using images and RGB-D data","authors":"E. M. Thompson, A. Ranieri, S. Biasotti, Miguel Chicchón, I. Sipiran, Minh Pham, Thang-Long Nguyen-Ho, Hai-Dang Nguyen, M. Tran","doi":"10.48550/arXiv.2205.13326","DOIUrl":"https://doi.org/10.48550/arXiv.2205.13326","url":null,"abstract":"This paper describes the methods submitted for evaluation to the SHREC 2022 track on pothole and crack detection in the road pavement. A total of 7 different runs for the semantic segmentation of the road surface are compared, 6 from the participants plus a baseline method. All methods exploit Deep Learning techniques and their performance is tested using the same environment (i.e.: a single Jupyter notebook). A training set, composed of 3836 semantic segmentation image/mask pairs and 797 RGB-D video clips collected with the latest depth cameras was made available to the participants. The methods are then evaluated on the 496 image/mask pairs in the validation set, on the 504 pairs in the test set and finally on 8 video clips. The analysis of the results is based on quantitative metrics for image segmentation and qualitative analysis of the video clips. The participation and the results show that the scenario is of great interest and that the use of RGB-D data is still challenging in this context.","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"12 1","pages":"161-171"},"PeriodicalIF":0.0,"publicationDate":"2022-05-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86071287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simultaneous 3D dithering of multiple images by curves","authors":"G. Elber","doi":"10.2139/ssrn.4092899","DOIUrl":"https://doi.org/10.2139/ssrn.4092899","url":null,"abstract":"","PeriodicalId":51003,"journal":{"name":"Computer Graphics World","volume":"7 1","pages":"146-152"},"PeriodicalIF":0.0,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83742808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}