Developing effective visual analytics systems demands careful characterization of domain problems and integration of visualization techniques and computational models. Urban visual analytics has already achieved remarkable success in tackling urban problems and providing fundamental services for smart cities. To promote further academic research and assist the development of industrial urban analytics systems, we comprehensively review urban visual analytics studies from four perspectives. In particular, we identify 8 urban domains and 22 popular types of visualization, analyze 7 types of computational methods, and categorize existing systems into 4 types based on how they integrate visualization techniques and computational models. We conclude with potential research directions and opportunities.
{"title":"A survey of urban visual analytics: Advances and future directions.","authors":"Zikun Deng, Di Weng, Shuhan Liu, Yuan Tian, Mingliang Xu, Yingcai Wu","doi":"10.1007/s41095-022-0275-7","DOIUrl":"10.1007/s41095-022-0275-7","url":null,"abstract":"<p><p>Developing effective visual analytics systems demands care in characterization of domain problems and integration of visualization techniques and computational models. Urban visual analytics has already achieved remarkable success in tackling urban problems and providing fundamental services for smart cities. To promote further academic research and assist the development of industrial urban analytics systems, we comprehensively review urban visual analytics studies from four perspectives. In particular, we identify 8 urban domains and 22 types of popular visualization, analyze 7 types of computational method, and categorize existing systems into 4 types based on their integration of visualization techniques and computational models. We conclude with potential research directions and opportunities.</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"9 1","pages":"3-39"},"PeriodicalIF":17.3,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9579670/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40655032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents an unsupervised clustering random-forest-based metric for affinity estimation in large, high-dimensional data. The criterion used for node splitting during forest construction can handle rank deficiency when measuring cluster compactness. The binary forest-based metric is extended to a continuous metric by exploiting both the common traversal path and the smallest shared parent node. The proposed forest-based metric efficiently estimates affinity by passing data pairs down a limited number of decision trees in the forest. A pseudo-leaf-splitting (PLS) algorithm is introduced to account for spatial relationships, which regularizes the affinity measures and overcomes inconsistent leaf assignments. The random-forest-based metric with PLS facilitates the establishment of consistent, point-wise correspondences. The proposed method has been applied to automatic phrase recognition using color and depth videos, and to point-wise correspondence establishment. Extensive experiments demonstrate the effectiveness of the proposed method for affinity estimation in comparison with the state of the art.
{"title":"Unsupervised random forest for affinity estimation.","authors":"Yunai Yi, Diya Sun, Peixin Li, Tae-Kyun Kim, Tianmin Xu, Yuru Pei","doi":"10.1007/s41095-021-0241-9","DOIUrl":"10.1007/s41095-021-0241-9","url":null,"abstract":"<p><p>This paper presents an unsupervised clustering random-forest-based metric for affinity estimation in large and high-dimensional data. The criterion used for node splitting during forest construction can handle rank-deficiency when measuring cluster compactness. The binary forest-based metric is extended to continuous metrics by exploiting both the common traversal path and the smallest shared parent node. The proposed forest-based metric efficiently estimates affinity by passing down data pairs in the forest using a limited number of decision trees. A pseudo-leaf-splitting (PLS) algorithm is introduced to account for spatial relationships, which regularizes affinity measures and overcomes inconsistent leaf assign-ments. The random-forest-based metric with PLS facilitates the establishment of consistent and point-wise correspondences. The proposed method has been applied to automatic phrase recognition using color and depth videos and point-wise correspondence. Extensive experiments demonstrate the effectiveness of the proposed method in affinity estimation in a comparison with the state-of-the-art.</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"8 2","pages":"257-272"},"PeriodicalIF":6.9,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8645415/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39720010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-01-01. Epub Date: 2021-10-27. DOI: 10.1007/s41095-021-0231-y
Yu He, Guo-Dong Zhao, Song-Hai Zhang
Stable label movement and smooth label trajectories are critical for effective information understanding. Sudden label changes cannot be avoided by force-directed methods, owing to the unreliability of the resultant force, nor by global optimization methods, owing to the complex trade-offs among different objectives. To solve this problem, we propose a hybrid optimization method that combines the merits of both approaches. We first detect the spatial-temporal intersection regions from the whole trajectories of the features, and initialize the layout by optimization, processing regions in decreasing order of the number of involved features. Label movements between the spatial-temporal intersection regions are determined by a force-directed method. To cope with features moving at high speed relative to their neighbors, we introduce a force from the future, called the temporal force, so that the labels of such features can move out of the way ahead of time and retain smooth movement. We also propose a strategy that predicts feature trajectories while optimizing the label layout, so that this global optimization method can be applied to streaming data.
Electronic supplementary material: Supplementary material is available in the online version of this article at 10.1007/s41095-021-0231-y.
{"title":"Smoothness preserving layout for dynamic labels by hybrid optimization.","authors":"Yu He, Guo-Dong Zhao, Song-Hai Zhang","doi":"10.1007/s41095-021-0231-y","DOIUrl":"https://doi.org/10.1007/s41095-021-0231-y","url":null,"abstract":"<p><p>Stable label movement and smooth label trajectory are critical for effective information understanding. Sudden label changes cannot be avoided by whatever forced directed methods due to the unreliability of resultant force or global optimization methods due to the complex trade-off on the different aspects. To solve this problem, we proposed a hybrid optimization method by taking advantages of the merits of both approaches. We first detect the spatial-temporal intersection regions from whole trajectories of the features, and initialize the layout by optimization in decreasing order by the number of the involved features. The label movements between the spatial-temporal intersection regions are determined by force directed methods. To cope with some features with high speed relative to neighbors, we introduced a force from future, called temporal force, so that the labels of related features can elude ahead of time and retain smooth movements. We also proposed a strategy by optimizing the label layout to predict the trajectories of features so that such global optimization method can be applied to streaming data.</p><p><strong>Electronic supplementary material: </strong>Supplementary material is available in the online version of this article at 10.1007/s41095-021-0231-y.</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"8 1","pages":"149-163"},"PeriodicalIF":6.9,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8549603/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39579548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-01-01. Epub Date: 2021-01-07. DOI: 10.1007/s41095-020-0199-z
Tao Zhou, Deng-Ping Fan, Ming-Ming Cheng, Jianbing Shen, Ling Shao
Salient object detection, which simulates human visual perception in locating the most significant object(s) in a scene, has been widely applied to various computer vision tasks. Now, the advent of depth sensors means that depth maps can easily be captured; this additional spatial information can boost the performance of salient object detection. Although various RGB-D based salient object detection models with promising performance have been proposed over the past several years, an in-depth understanding of these models and the challenges in this field remains lacking. In this paper, we provide a comprehensive survey of RGB-D based salient object detection models from various perspectives, and review related benchmark datasets in detail. Further, as light fields can also provide depth maps, we review salient object detection models and popular benchmark datasets from this domain too. Moreover, to investigate the ability of existing models to detect salient objects, we have carried out a comprehensive attribute-based evaluation of several representative RGB-D based salient object detection models. Finally, we discuss several challenges and open directions of RGB-D based salient object detection for future research. All collected models, benchmark datasets, datasets constructed for attribute-based evaluation, and related code are publicly available at https://github.com/taozh2017/RGBD-SODsurvey.
{"title":"RGB-D salient object detection: A survey.","authors":"Tao Zhou, Deng-Ping Fan, Ming-Ming Cheng, Jianbing Shen, Ling Shao","doi":"10.1007/s41095-020-0199-z","DOIUrl":"10.1007/s41095-020-0199-z","url":null,"abstract":"<p><p>Salient object detection, which simulates human visual perception in locating the most significant object(s) in a scene, has been widely applied to various computer vision tasks. Now, the advent of depth sensors means that depth maps can easily be captured; this additional spatial information can boost the performance of salient object detection. Although various RGB-D based salient object detection models with promising performance have been proposed over the past several years, an in-depth understanding of these models and the challenges in this field remains lacking. In this paper, we provide a comprehensive survey of RGB-D based salient object detection models from various perspectives, and review related benchmark datasets in detail. Further, as light fields can also provide depth maps, we review salient object detection models and popular benchmark datasets from this domain too. Moreover, to investigate the ability of existing models to detect salient objects, we have carried out a comprehensive attribute-based evaluation of several representative RGB-D based salient object detection models. Finally, we discuss several challenges and open directions of RGB-D based salient object detection for future research. All collected models, benchmark datasets, datasets constructed for attribute-based evaluation, and related code are publicly available at https://github.com/taozh2017/RGBD-SODsurvey.</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"7 1","pages":"37-69"},"PeriodicalIF":17.3,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7788385/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10683664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-01. Epub Date: 2020-11-07. DOI: 10.1007/s41095-020-0197-1
Or Patashnik, Min Lu, Amit H Bermano, Daniel Cohen-Or
Visualizing high-dimensional data on a 2D canvas is generally challenging. It becomes significantly more difficult when multiple time-steps are to be presented, as the visual clutter quickly increases. Moreover, perceiving significant temporal evolution is even more challenging. In this paper, we present a method to plot temporal high-dimensional data in a static scatterplot; it uses the established PCA technique to project data from multiple time-steps. The key idea is to extend each individual displacement prior to applying PCA, so as to skew the projection process and set a projection plane that balances the directions of temporal change and spatial variance. We present numerous examples and various visual cues to highlight the data trajectories, and demonstrate the effectiveness of the method for visualizing temporal data.
{"title":"Temporal scatterplots.","authors":"Or Patashnik, Min Lu, Amit H Bermano, Daniel Cohen-Or","doi":"10.1007/s41095-020-0197-1","DOIUrl":"https://doi.org/10.1007/s41095-020-0197-1","url":null,"abstract":"<p><p>Visualizing high-dimensional data on a 2D canvas is generally challenging. It becomes significantly more difficult when multiple time-steps are to be presented, as the visual clutter quickly increases. Moreover, the challenge to perceive the significant temporal evolution is even greater. In this paper, we present a method to plot temporal high-dimensional data in a static scatterplot; it uses the established PCA technique to project data from multiple time-steps. The key idea is to extend each individual displacement prior to applying PCA, so as to skew the projection process, and to set a projection plane that balances the directions of temporal change and spatial variance. We present numerous examples and various visual cues to highlight the data trajectories, and demonstrate the effectiveness of the method for visualizing temporal data.</p>","PeriodicalId":37301,"journal":{"name":"Computational Visual Media","volume":"6 4","pages":"385-400"},"PeriodicalIF":6.9,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s41095-020-0197-1","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38604606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}