
2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI): Latest Publications

A Practical Review on Medical Image Registration: From Rigid to Deep Learning Based Approaches
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00066
Natan Andrade, F. Faria, F. Cappabianco
The large variety of medical image modalities (e.g., Computed Tomography, Magnetic Resonance Imaging, and Positron Emission Tomography) acquired from the same body region of a patient, together with recent advances in computer architectures offering faster and larger CPUs and GPUs, opens a new, exciting, and unexplored world for the image registration area. Precise and accurate registration of images makes it possible to understand the etiology of diseases, improve surgery planning and execution, detect otherwise unnoticed health problem signals, and map functionalities of the brain. The goal of this paper is to present a review of the state of the art in medical image registration, starting from the preprocessing steps, covering the most popular methodologies in the literature, and finishing with the more recent advances and perspectives from the application of Deep Learning architectures.
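Below is a minimal sketch of the kind of classical rigid registration the review starts from: it estimates a rotation plus translation by minimizing the mean squared intensity difference between two NumPy images. The synthetic images, the parameterization, and the Powell optimizer are assumptions for illustration, not the paper's pipeline.

```python
import numpy as np
from scipy import ndimage, optimize

def rigid_warp(image, theta, tx, ty):
    """Rotate by theta (radians) about the image centre, then translate by (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    centre = np.array(image.shape) / 2.0
    offset = centre - rot @ centre + np.array([ty, tx])
    return ndimage.affine_transform(image, rot, offset=offset, order=1)

def mse(params, fixed, moving):
    theta, tx, ty = params
    return np.mean((fixed - rigid_warp(moving, theta, tx, ty)) ** 2)

# Toy example: the estimated parameters approximately invert the applied motion.
fixed = np.zeros((64, 64)); fixed[20:40, 25:45] = 1.0
moving = rigid_warp(fixed, 0.1, 3.0, -2.0)
result = optimize.minimize(mse, x0=[0.0, 0.0, 0.0], args=(fixed, moving), method="Powell")
print("estimated (theta, tx, ty):", result.x)
```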
Citations: 16
Graph Spectral Filtering for Network Simplification
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00051
Markus Diego Dias, Fabiano Petronetto, Paola Valdivia, L. G. Nonato
Visualization is an important tool in the analysis and understanding of networks and their content. However, visualization tools face major challenges when dealing with large networks, mainly due to visual clutter. In this context, network simplification has been a main alternative for handling massive networks, reducing complexity while preserving relevant patterns of the network structure and content. In this paper we propose a methodology that relies on Graph Signal Processing theory to filter multivariate data associated with network nodes, assisting and enhancing network simplification and visualization tasks. The simplification process takes into account both topological and multivariate data associated with network nodes to create a hierarchical representation of the network. The effectiveness of the proposed methodology is assessed through a comprehensive set of quantitative evaluations and comparisons, which gauge the impact of the proposed filtering process on the simplification and visualization tasks.
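As an illustration of the underlying Graph Signal Processing idea, the sketch below low-pass filters a node attribute using the eigendecomposition of the graph Laplacian; the toy adjacency matrix and the ideal filter response are assumptions, not the paper's simplification pipeline.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A                 # combinatorial graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)           # graph Fourier basis

signal = np.array([1.0, 0.9, 1.1, 5.0])        # one node attribute (last node is an outlier)
spectrum = eigvecs.T @ signal                  # graph Fourier transform
response = (eigvals <= np.median(eigvals)).astype(float)  # ideal low-pass response
filtered = eigvecs @ (response * spectrum)     # inverse transform of the filtered spectrum
print(filtered)
```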
Citations: 1
A Method for Opinion Classification in Video Combining Facial Expressions and Gestures
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00011
Airton Gaio Junior, E. Santos
Most research dealing with video-based opinion recognition combines data from three different sources: video, audio, and text. As a consequence, the resulting solutions are based on complex and language-dependent models. Besides such complexity, these current solutions tend to attain low performance in practical applications. Focusing on overcoming these drawbacks, this work presents a method for opinion classification that uses only video as the data source: facial expression and body gesture information is extracted from online videos and combined to reach higher classification rates. The proposed method uses feature encoding strategies to improve data representation and to facilitate the classification task, in order to predict the user's opinion with high accuracy and independently of the language used in the videos. Experiments were carried out using three public databases and three baselines to test the proposed method. The results show that, even performing only visual analysis of the videos, the proposed method achieves 16% higher accuracy and precision rates when compared to baselines that analyze visual, audio, and textual video data. Moreover, it is shown that the proposed method can identify emotions in videos whose language differs from the language used for training.
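A minimal sketch of the fusion-and-classification step is given below, assuming pre-extracted per-video facial-expression and gesture descriptors, random stand-in data, and a scikit-learn linear SVM; the paper's specific feature encoding strategies are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_videos = 200
face_feats = rng.normal(size=(n_videos, 64))      # stand-in facial-expression descriptors
gesture_feats = rng.normal(size=(n_videos, 32))   # stand-in body-gesture descriptors
X = np.hstack([face_feats, gesture_feats])        # early fusion by concatenation
y = rng.integers(0, 2, size=n_videos)             # opinion label: 0 = negative, 1 = positive

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```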
Citations: 0
3D Medical Objects Retrieval Approach Using SPHARMs Descriptor and Network Flow as Similarity Measure
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00049
L. Bergamasco, K. Lima, C. Rochitte, Fátima L. S. Nunes
Processing data to obtain useful information is a trending topic in computing, given the high demand from society for efficient techniques to perform this activity. Spherical Harmonics (SPHARMs) have been widely used in the three-dimensional (3D) object processing domain. Harmonic coefficients generated by this mathematical theory are considered a robust source of information about 3D objects. In parallel, Ford-Fulkerson is a classical method in graph theory that solves network flow problems. In this work we demonstrate the potential of using SPHARMs along with the Ford-Fulkerson method, respectively as descriptor and similarity measure. This article also shows how we adapted the latter to turn it into a similarity measure. Our approach has been validated on a 3D medical dataset composed of 3D left ventricle surfaces, some of them presenting Congestive Heart Failure (CHF). The results indicated an average precision of 90%. In addition, the execution time was 65% lower than that of a previously tested descriptor. With the results obtained, we conclude that our approach, mainly the proposed Ford-Fulkerson adaptation, has great potential for retrieving 3D medical objects.
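To illustrate how a max-flow value can act as a similarity score, the sketch below routes flow between two toy descriptor histograms through a small bipartite graph and normalizes the resulting flow value; the graph construction and the use of NetworkX's built-in max-flow solver (rather than a hand-rolled Ford-Fulkerson) are assumptions for illustration, not the paper's adaptation.

```python
import networkx as nx

def flow_similarity(desc_a, desc_b):
    """Max-flow between two non-negative descriptors, normalized to [0, 1]."""
    g = nx.DiGraph()
    for i, v in enumerate(desc_a):
        g.add_edge("s", f"a{i}", capacity=float(v))
    for j, w in enumerate(desc_b):
        g.add_edge(f"b{j}", "t", capacity=float(w))
    for i in range(len(desc_a)):
        for j in range(len(desc_b)):
            # routing flow is easy between matching bins and harder otherwise
            g.add_edge(f"a{i}", f"b{j}", capacity=1.0 if i == j else 0.25)
    value, _ = nx.maximum_flow(g, "s", "t")
    return value / max(sum(desc_a), sum(desc_b))

print(flow_similarity([3, 1, 2], [2, 2, 2]))
```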
Citations: 3
Image-Based Visualization of Classifier Decision Boundaries
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00052
F. C. M. Rodrigues, R. Hirata, A. Telea
Understanding how a classifier partitions a high-dimensional input space and assigns labels to the parts is an important task in machine learning. Current methods for this task mainly use color-coded sample scatterplots, which do not explicitly show the actual decision boundaries or confusion zones. We propose an image-based technique to improve such visualizations. The method samples the 2D space of a dimensionality-reduction projection and color-codes relevant classifier outputs, such as the majority class label, the confusion, and the sample density, to render a dense depiction of the high-dimensional decision boundaries. Our technique is simple to implement, handles any classifier, and has only two simple-to-control free parameters. We demonstrate our proposal on several real-world high-dimensional datasets, classifiers, and two different dimensionality reduction methods.
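The sketch below reproduces the core idea with scikit-learn: project the data to 2D with PCA (which has an inverse transform), densely sample the projected plane, map each grid point back to the original space, classify it, and render the label map as an image behind the projected samples. The dataset, classifier, and grid resolution are stand-ins, not the paper's exact setup.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(random_state=0).fit(X, y)
pca = PCA(n_components=2).fit(X)
P = pca.transform(X)

# Dense grid over the projected space, mapped back to nD and classified.
xs = np.linspace(P[:, 0].min(), P[:, 0].max(), 300)
ys = np.linspace(P[:, 1].min(), P[:, 1].max(), 300)
gx, gy = np.meshgrid(xs, ys)
labels = clf.predict(pca.inverse_transform(np.c_[gx.ravel(), gy.ravel()])).reshape(gx.shape)

plt.imshow(labels, origin="lower", extent=(xs[0], xs[-1], ys[0], ys[-1]),
           cmap="Pastel1", aspect="auto")
plt.scatter(P[:, 0], P[:, 1], c=y, cmap="Set1", s=12, edgecolors="k")
plt.title("Dense decision-boundary map behind the projected samples")
plt.show()
```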
Citations: 13
Extracting Visual Encodings from Map Chart Images with Color-Encoded Scalar Values
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00025
Angela Mayhua, Erick Gomez Nieto, Jeffrey Heer, Jorge Poco
Map charts are used in diverse domains to show geographic data (e.g., climate research, oceanography, business analysis, etc.). These charts can be found in news articles, scientific papers, and on the Web. However, many map charts are available only as bitmap images, hindering machine interpretation of the visualized data for indexing and reuse. We propose a pipeline to recover both the visual encodings and the underlying data from bitmap images of geographic maps with color-encoded scalar values. We evaluate our results using map images from scientific documents, achieving high accuracy at each step of our proposal. In addition, we present two applications, data extraction and map reprojection, to enable improved visual representations of map charts.
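A hedged sketch of the data-recovery step is shown below: each pixel colour is mapped back to a scalar by nearest-neighbour lookup in a known colormap. The colormap is assumed to be known; detecting the legend and georeferencing the map, which the full pipeline handles, are omitted.

```python
import numpy as np
import matplotlib.pyplot as plt

cmap = plt.get_cmap("viridis")
lut = cmap(np.linspace(0, 1, 256))[:, :3]             # 256 reference colours

# Toy "map chart": colour-encode a known scalar field, then invert it from the colours.
field = np.random.default_rng(0).random((32, 32))
image = cmap(field)[:, :, :3]

dists = np.linalg.norm(image[:, :, None, :] - lut[None, None, :, :], axis=-1)
recovered = dists.argmin(axis=-1) / 255.0              # back to [0, 1]
print("max reconstruction error:", np.abs(recovered - field).max())
```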
Citations: 6
Single-Shot Person Re-Identification Combining Similarity Metrics and Support Vectors
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00039
Anderson Luis Cavalcanti Sales, R. H. Vareto, W. R. Schwartz, Guillermo Cámara Chávez
Person re-identification consists in determining a person's entire course as he or she walks through camera-equipped zones. More precisely, person Re-ID is the problem of matching human identities captured by non-overlapping surveillance cameras. In this work, we propose an approach that learns a new low-dimensional metric space in an attempt to cut down multi-camera matching errors. We represent the training and test samples by concatenating handcrafted features. Then, the method performs a two-step ranking using elementary distance metrics followed by an ensemble of weighted binary classifiers. We validate our approach on the CUHK01 and PRID450s datasets, providing only one sample per class for the probe and only one sample for the gallery (single-shot). According to the experiments, our method achieves CMC Rank-1 results of up to 61.1 and 75.4, following leading literature protocols, for CUHK01 and PRID450s, respectively.
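Below is a hedged sketch of the first, distance-based ranking stage and the CMC Rank-1 score on synthetic single-shot data; the learned metric space and the ensemble of weighted binary classifiers from the second stage are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ids, dim = 50, 128
gallery = rng.normal(size=(n_ids, dim))                 # one sample per identity (single-shot)
probe = gallery + 0.3 * rng.normal(size=(n_ids, dim))   # noisy re-observations from another camera

# Rank gallery identities for every probe by Euclidean distance.
dists = np.linalg.norm(probe[:, None, :] - gallery[None, :, :], axis=-1)
ranking = dists.argsort(axis=1)

rank1 = np.mean(ranking[:, 0] == np.arange(n_ids))      # fraction of correct top-1 matches
print(f"CMC Rank-1: {rank1:.2f}")
```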
Citations: 1
Inverse Projection of Vector Fields
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00050
Paula Ceccon Ribeiro, H. Lopes
Vector fields play an essential role in a large range of scientific applications. They are commonly generated through computer simulations. Such simulations can be a costly process since they usually require intensive computational time. When researchers want to quantify the uncertainty in this kind of application, an ensemble of vector field realizations is usually generated, making the process much more expensive. The main contribution of this paper is a new method, based on the inverse projection technique, to quickly and consistently generate 2D vector fields similar to the ones in the ensemble, which, after evaluation by a specialist, could enlarge the ensemble in order to better represent the uncertainty. Through the Helmholtz-Hodge Decomposition, we obtain the divergence-free, rotation-free, and harmonic components of a vector field. With those components and the original ensemble in hand, it is possible to derive new realizations from their projections into a 2-dimensional space. To do so, we propose the use of an inverse projection technique applied individually in each component's projected space. Results are obtained in real time, through an interactive interface. A set of multi-method wind forecast realizations was used to demonstrate the results obtained with this approach.
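The sketch below illustrates the inverse-projection idea on a toy ensemble of flattened 2D vector fields, using PCA as an invertible projection: a new 2D point inside the projected ensemble is mapped back to a full vector field. The random ensemble and the use of PCA are assumptions; the paper applies inverse projection per Helmholtz-Hodge component, which is not computed here.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
H, W = 16, 16
ensemble = rng.normal(size=(30, H * W * 2))   # 30 realizations of a (u, v) field, flattened

pca = PCA(n_components=2).fit(ensemble)
coords = pca.transform(ensemble)              # each realization becomes a 2D point

# Pick a new point near the centre of the projected ensemble and map it back to a field.
new_point = coords.mean(axis=0) + np.array([0.1, -0.2])
new_field = pca.inverse_transform(new_point.reshape(1, -1))[0].reshape(H, W, 2)
print("new realization shape:", new_field.shape)
```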
Citations: 1
SIBGRAPI 2018 Foreword
Pub Date : 2018-10-01 DOI: 10.1109/sibgrapi.2018.00005
{"title":"SIBGRAPI 2018 Foreword","authors":"","doi":"10.1109/sibgrapi.2018.00005","DOIUrl":"https://doi.org/10.1109/sibgrapi.2018.00005","url":null,"abstract":"","PeriodicalId":208985,"journal":{"name":"2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125749791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Unsupervised Representation Learning Using Convolutional and Stacked Auto-Encoders: A Domain and Cross-Domain Feature Space Analysis
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00063
G. B. Cavallari, Leo Sampaio Ferraz Ribeiro, M. Ponti
A feature learning task involves training models that are capable of inferring good representations (transformations of the original space) from input data alone. When working with limited or unlabelled data, and also when multiple visual domains are considered, methods that rely on large annotated datasets, such as Convolutional Neural Networks (CNNs), cannot be employed. In this paper we investigate different auto-encoder (AE) architectures, which require no labels, and explore training strategies to learn representations from images. The models are evaluated considering both the reconstruction error of the images and the discriminative power of the resulting feature spaces. We study the role of dense and convolutional layers on the results, as well as the depth and capacity of the networks, since those are shown to affect both the dimensionality reduction and the capability of generalising to different visual domains. Classification results with AE features were as discriminative as those with pre-trained CNN features. Our findings can be used as guidelines for the design of unsupervised representation learning methods within and across domains.
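As a minimal illustration, the sketch below trains a small convolutional auto-encoder for reconstruction and exposes its bottleneck as the learned representation; it assumes PyTorch and random stand-in images, and does not reproduce the specific architectures or the downstream classification protocol compared in the paper.

```python
import torch
from torch import nn

class ConvAE(nn.Module):
    def __init__(self, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
            nn.Flatten(), nn.Linear(32 * 7 * 7, latent))
        self.decoder = nn.Sequential(
            nn.Linear(latent, 32 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)            # the representation later used as features
        return self.decoder(z), z

model = ConvAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(64, 1, 28, 28)     # stand-in for an unlabelled image batch
for _ in range(5):
    recon, _ = model(images)
    loss = nn.functional.mse_loss(recon, images)
    opt.zero_grad(); loss.backward(); opt.step()
print("final reconstruction loss:", loss.item())
```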
Citations: 23