
Latest publications — 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)

SIBGRAPI 2018 Program Committee
Pub Date : 2018-10-01 DOI: 10.1109/sibgrapi.2018.00006
Abel Gomes, A. Barbosa, Adriano Veloso, Afonso Paiva, Alexandre Chapiro, Alexandre Falcao, Andrew Nealen, A. Lacerda, António Coelho, Aristófanes Correa
Abel Gomes, University of Beira Interior Adín Ramírez Rivera, Unicamp Adriano Barbosa, UFGD Adriano Veloso, UFMG Afonso Paiva, ICMC-USP Alessandro Koerich, ETS-Montreal Alex Laier, UFF Alexandre Chapiro, Dolby Laboratories Alexandre Falcao, IC-UNICAMP Alexandre Zaghetto, University of Brasília Aline Paes, Institute of Computing / Universidade Federal Fluminense Alper Yilmaz, Ohio State University, Ohio Amilcar Soares Junior, Dalhousie University Ana Serrano, Universidad de Zaragoza Anderson Maciel, UFRGS André Backes, Universidade Federal de Uberlândia André Saúde, UFLA Andrew Nealen, USC Anisio Lacerda, CEFET-MG Antonio Nazare, Federal University of Minas Gerais Antonio Vieira, UNIMONTES António Coelho, FEUP/INESC TEC Aparecido Marana, UNESP Aristófanes Correa, UFMA Azael Sousa, Unicamp Bernardo Henz, UFRGS / IFFar Bruno Espinoza, UnB Cai Minjie, University of Tokyo Camilo Dorea, University of Brasilia Carla Pagliari, Instituto Militar de Engenharia Carlos Santos, UFABC Carlos Thomaz, FEI Christian Pagot, UFPB Claudio Esperança, UFRJ Claudio Jung, UFRGS Creto Vidal, UFC Cristina Vasconcelos, UFF Cunjian Chen, Michigan State University Daniel Pedronette, UNESP Danilo Coimbra, Federal University of Bahia David Menotti, Federal University of Paraná Dibio Borges, UnB Diogo Garcia, University of Brasilia, Brazil Edilson de Aguiar, UFES
Citations: 0
Decoupling Expressiveness and Body-Mechanics in Human Motion
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00035
Gustavo Eggert Boehs, M. Vieira, Clovis Geyer Pereira
Modern motion-capture systems can record human motion with high precision. Editing this kind of data is troublesome due to its volume and complexity. In this paper, we present a method for decoupling the aspects of human motion that are strictly related to locomotion and balance from other movements that may convey expressiveness and intentionality. We then demonstrate how this decoupling can be useful for creating variations of the original motion, or for mixing different actions together.
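To illustrate the decoupling idea, a minimal sketch might split a single joint trajectory into a smooth "body-mechanics" base and an expressive residual via low-pass filtering. This is a simplification for illustration only, not the decomposition used in the paper:

```python
import numpy as np

def decouple_motion(trajectory, window=15):
    """Split a 1-D joint trajectory into a smooth base component
    (moving average, a stand-in for locomotion/balance) and an
    'expressive' residual. Illustrative only; the paper's method
    is more elaborate than a moving-average filter."""
    kernel = np.ones(window) / window
    base = np.convolve(trajectory, kernel, mode="same")
    residual = trajectory - base
    return base, residual

# Slow "gait" component plus fast expressive detail.
t = np.linspace(0, 2 * np.pi, 200)
motion = np.sin(t) + 0.1 * np.sin(20 * t)
base, residual = decouple_motion(motion)
```

Because base and residual sum back to the original signal, each component can be edited or recombined with another action independently.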
Citations: 0
Delaunay Triangulation Data Augmentation Guided by Visual Analytics for Deep Learning
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00056
A. Peixinho, B. C. Benato, L. G. Nonato, A. Falcão
It is well known that image classification problems can be effectively solved by Convolutional Neural Networks (CNNs). However, the number of supervised training examples from all categories must be high enough to avoid model overfitting. In this case, two key alternatives are usually presented: (a) the generation of artificial examples, known as data augmentation, and (b) reusing a CNN previously trained over a large supervised training set from another image classification problem — a strategy known as transfer learning. Deep learning approaches have rarely exploited the superior ability of humans for cognitive tasks during the machine learning loop. We advocate that expert intervention through visual analytics can improve machine learning. In this work, we demonstrate this claim by proposing a data augmentation framework based on Encoder-Decoder Neural Networks (EDNNs) and visual analytics for the design of more effective CNN-based image classifiers. An EDNN is initially trained such that its encoder extracts a feature vector from each training image. These samples are projected from the encoder feature space onto a 2D coordinate space. The expert adds points to the projection space, and the feature vectors of the new samples are obtained in the original feature space by interpolation. The decoder generates artificial images from the feature vectors of the new samples, and the augmented training set is used to improve the CNN-based classifier. We evaluate methods for the proposed framework and demonstrate its advantages using data from a real problem as a case study — the diagnosis of helminth eggs in humans. We also show that transfer learning and data augmentation by affine transformations can further improve the results.
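The augmentation step hinges on interpolating new feature vectors between encoded training samples, which a decoder then maps back to artificial images. A minimal sketch of the interpolation, using plain linear blending between two encoded samples (the paper instead interpolates from expert-placed points in a 2-D projection):

```python
import numpy as np

def interpolate_features(f_a, f_b, alphas):
    """Generate new feature vectors on the segment between two
    encoded training samples. Each result would be fed to the
    decoder to synthesize an artificial training image.
    Hypothetical helper, not code from the paper."""
    f_a, f_b = np.asarray(f_a, float), np.asarray(f_b, float)
    return [(1 - a) * f_a + a * f_b for a in alphas]

# Three synthetic samples between two (toy) encoder outputs.
new_samples = interpolate_features([0.0, 1.0], [1.0, 3.0], [0.25, 0.5, 0.75])
```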
Citations: 10
Factors Influencing the Perception of Realism in Synthetic Facial Expressions
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00045
R. L. Testa, Ariane Machado-Lima, Fátima L. S. Nunes
One way to synthesize facial expressions is to change an image to represent the desired emotion, which is useful in entertainment, diagnosis, and psychiatric disorder therapy applications. Despite several existing approaches, there is little discussion of the factors that contribute to or hinder the perception of realism in synthetic facial expression images. After presenting an approach for facial expression synthesis through the deformation of facial features, this paper provides an evaluation by 155 volunteers of the realism of the synthesized images. The proposed facial expression synthesis aims to generate new images using two source images (a neutral and an expressive face) and changing the expression in a target image (a neutral face). The results suggest that the attribution of realism depends on the type of image (real or synthetic). However, the synthesis produces images that can be considered realistic, especially for the expression of happiness. Finally, while factors such as color differences between adjacent regions and unnaturally sized facial features reduce realism, other factors, such as the presence of wrinkles, contribute to a greater attribution of realism to the images.
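The two-source scheme can be pictured geometrically: the displacement between the source's neutral and expressive landmarks is applied to the target's neutral landmarks. A toy sketch of that landmark transfer (a full system would then warp the target image to the displaced landmarks; names and coordinates here are hypothetical):

```python
import numpy as np

def transfer_expression(src_neutral, src_expressive, tgt_neutral):
    """Apply the source's landmark displacement (expressive minus
    neutral) to the target's neutral landmarks. A geometric sketch
    of the deformation idea, not the paper's implementation."""
    d = np.asarray(src_expressive, float) - np.asarray(src_neutral, float)
    return np.asarray(tgt_neutral, float) + d

# Mouth-corner landmarks (x, y): a smile moves the corners outward and up.
src_n = [[30.0, 60.0], [70.0, 60.0]]
src_e = [[28.0, 56.0], [72.0, 56.0]]
tgt = transfer_expression(src_n, src_e, [[32.0, 62.0], [68.0, 62.0]])
```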
Citations: 2
360 Stitching from Dual-Fisheye Cameras Based on Feature Cluster Matching
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00047
Tancredo Souza, R. Roberto, J. P. Lima, V. Teichrieb, J. Quintino, F. Q. Silva, André L. M. Santos, Helder Pinho
In the past years, captures made by dual-fisheye lens cameras have been used for virtual reality, 360 broadcasting, and many other applications. For these scenarios, to provide a good-quality experience, the alignment of the boundaries between the two images to be stitched must be done properly. However, due to the peculiar design of dual-fisheye cameras and the high variance between different captured scenes, the stitching process can be very challenging. In this work, we present a 360 stitching solution based on feature cluster matching. It is an adaptive stitching technique based on the extraction of feature cluster templates from the stitching region. We propose an alignment based on template matching of these clusters, successfully reducing the discontinuities in the full-view panorama. We evaluate our method on a dataset built from captures made with an existing camera of this kind, Samsung's Gear 360. We also describe how these concepts can be extended from image stitching to video stitching using the temporal information of the media. Finally, we show that our matching method outperforms a state-of-the-art matching technique for image and video stitching.
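The cluster alignment described above ultimately reduces to template matching in the stitching region. A minimal brute-force normalized cross-correlation matcher, as a generic stand-in for that step (not the paper's feature-cluster pipeline):

```python
import numpy as np

def match_template_ncc(image, template):
    """Return the (row, col) where `template` best matches `image`
    under normalized cross-correlation. O(n^2) scan for clarity;
    real stitchers use optimized matchers."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            if denom == 0:          # flat patch: correlation undefined
                continue
            score = (p * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Plant a known 3x3 pattern and recover its location.
img = np.zeros((10, 10))
img[4:7, 5:8] = np.arange(9).reshape(3, 3)
pos = match_template_ncc(img, img[4:7, 5:8])
```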
Citations: 2
Biometric Recognition in Surveillance Environments Using Master-Slave Architectures
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00068
Hugo Proença, J. Neves
The number of visual surveillance systems deployed worldwide has been growing astoundingly. As a result, attempts have been made to increase the level of automated analysis of such systems, towards the reliable recognition of human beings in fully covert conditions. Among other possibilities, master-slave architectures can be used to acquire high-resolution data of subjects' heads from large distances, with enough resolution to perform face recognition. This paper/tutorial provides a comprehensive overview of the major phases behind the development of a recognition system working in outdoor surveillance scenarios, describing frameworks and methods to: 1) use coupled wide-view and Pan-Tilt-Zoom (PTZ) imaging devices in surveillance settings, with a wide-view camera covering the whole scene while a synchronized PTZ device collects high-resolution data from the head region; 2) use soft biometric information (e.g., body metrology and gait) to prune the set of potential identities for each query; and 3) faithfully balance ethics/privacy and safety/security concerns in this kind of system.
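Step 2 above — pruning the identity set with soft biometrics before the expensive face-matching stage — can be sketched as a simple filter over a candidate gallery. Field names, tolerances, and gallery entries below are hypothetical:

```python
def prune_candidates(gallery, height_m, gait_period_s,
                     height_tol=0.05, gait_tol=0.1):
    """Keep only gallery identities whose soft biometrics (body
    height and gait period, both hypothetical fields) are within
    tolerance of the query's measurements. Illustrative sketch of
    the pruning idea, not the paper's actual criteria."""
    return [p for p in gallery
            if abs(p["height_m"] - height_m) <= height_tol
            and abs(p["gait_period_s"] - gait_period_s) <= gait_tol]

gallery = [
    {"id": "A", "height_m": 1.82, "gait_period_s": 1.1},
    {"id": "B", "height_m": 1.65, "gait_period_s": 1.0},
    {"id": "C", "height_m": 1.80, "gait_period_s": 1.6},
]
candidates = prune_candidates(gallery, height_m=1.81, gait_period_s=1.05)
```

Only the surviving candidates would then be compared against the high-resolution head imagery from the PTZ camera.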
Citations: 0
A Divide-and-Conquer Clustering Approach Based on Optimum-Path Forest
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00060
Adan Echemendia Montero, A. Falcão
Data clustering is one of the main challenges in solving Data Science problems. Despite progress over almost a century of research, clustering algorithms still fail to identify groups naturally related to the semantics of the problem. Moreover, technological advances add crucial challenges through a considerable increase in data, which most techniques do not handle. We address these issues by proposing a divide-and-conquer approach to a clustering technique that is unique in finding one group per dome of the probability density function of the data — the Optimum-Path Forest (OPF) clustering algorithm. Our approach can use all samples, or at least many samples, in the unsupervised learning process without affecting the grouping performance and is, therefore, less likely to lose relevant grouping information. We show that it can obtain satisfactory results when segmenting natural images into superpixels.
Citations: 4
Asynchronous Stroboscopic Structured Lighting Image Processing Using Low-Cost Cameras
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00048
F. H. Borsato, C. Morimoto
Structured lighting (SL) image processing relies on the generation of known illumination patterns synchronized with the camera frame rate and is commonly implemented using cameras with syncing capabilities. In general, such cameras employ global shutters, which expose the whole frame at once. However, most modern digital cameras use rolling shutters, which expose each line at a different interval, impairing most structured lighting applications. In this paper we introduce an asynchronous SL technique that can be used with any rolling-shutter digital camera. While the use of stroboscopic illumination partially compensates for the line exposure shift, the phase difference between the camera and lighting clocks results in stripe artifacts that move vertically in the video stream. These stripes are detected and tracked using a Kalman filter. Two asynchronous stroboscopic SL methods are proposed. The first method, image differencing, minimizes the stripe artifacts. The second method, image compositing, completely removes the artifacts. We demonstrate the use of the asynchronous differential lighting technique in a pupil detector using a low-cost high-speed camera with no synchronization means, with the lighting running independently at a higher frequency unknown to the application.
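Tracking the stripe's vertical drift with a Kalman filter, as mentioned above, can be sketched with a generic constant-velocity model over the stripe's row coordinate. The state model and noise settings below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def track_stripe(measurements, q=1e-3, r=0.5):
    """Constant-velocity Kalman filter over noisy row coordinates of
    the stripe artifact, one measurement per frame. Returns filtered
    positions. Generic sketch; q and r are assumed noise levels."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state: [position, velocity]
    H = np.array([[1.0, 0.0]])               # we only observe position
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new stripe-row measurement
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out

# A stripe drifting down roughly one row per frame, with measurement noise.
filtered = track_stripe([10.2, 11.1, 11.9, 13.05, 14.0])
```

The filtered trajectory smooths the per-frame noise while following the stripe's steady downward drift, which is what the detection stage needs to predict where the stripe will appear next.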
Citations: 1
Deep Instance Segmentation of Teeth in Panoramic X-Ray Images
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00058
Gil Jader, Jefferson Fontineli, Marco Ruiz, Kalyf Abdalla, M. Pithon, Luciano Oliveira
In dentistry, radiological examinations help specialists by showing the structure of the tooth bones, with the goal of screening for embedded teeth, bone abnormalities, cysts, tumors, infections, fractures, and problems in the temporomandibular regions, just to cite a few. Sometimes, relying solely on the specialist's opinion can lead to differences in diagnoses, which can ultimately hinder treatment. Although tools for fully automatic diagnosis are not yet expected, image pattern recognition has evolved towards decision support, mainly starting with the detection of teeth and their components in X-ray images. Tooth detection has been an object of research for at least the last two decades, mainly relying on threshold- and region-based methods. Following a different direction, this paper proposes to explore a deep learning method for instance segmentation of the teeth. To the best of our knowledge, it is the first system that detects and segments each tooth in panoramic X-ray images. It is noteworthy that this image type is the most challenging one for isolating teeth, since it shows other parts of the patient's body (e.g., chin, spine and jaws). We propose a segmentation system based on a mask region-based convolutional neural network (Mask R-CNN) to accomplish instance segmentation. Performance was thoroughly assessed on a challenging data set of 1500 images with high variation, containing 10 categories of different types of buccal image. By training the proposed system with only 193 mouth images containing 32 teeth on average, using transfer learning strategies, we achieved 98% accuracy, 88% F1-score, 94% precision, 84% recall, and 99% specificity over 1224 unseen images — results far superior to those of 10 other unsupervised methods.
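The metrics reported above (accuracy, F1-score, precision, recall, specificity) all derive from true/false positive and negative counts. A small helper showing how they relate — the counts below are made-up numbers for illustration, not the paper's data:

```python
def segmentation_scores(tp, fp, fn, tn):
    """Compute detection metrics of the kind reported above from
    true/false positive and negative counts. Illustrative helper,
    not code from the paper."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)               # also called sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    specificity = tn / (tn + fp)
    return {"precision": precision, "recall": recall, "f1": f1,
            "accuracy": accuracy, "specificity": specificity}

# Hypothetical counts chosen only to exercise the formulas.
scores = segmentation_scores(tp=84, fp=5, fn=16, tn=99)
```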
Citations: 118
Active Learning Approaches for Deforested Area Classification
Pub Date : 2018-10-01 DOI: 10.1109/SIBGRAPI.2018.00013
F. B. J. R. Dallaqua, F. Faria, Á. Fazenda
The conservation of tropical forests is a socially and ecologically relevant subject because of their important role in the global ecosystem. Forest monitoring is mostly done through the extraction and analysis of remote sensing imagery (RSI). In the literature, many works have successfully classified remote sensing images using machine learning techniques. Generally, traditional learning algorithms demand a large, representative training set, which can be expensive to build, especially in RSI, where the image spectrum varies across seasons and forest coverage. A semi-supervised learning paradigm known as active learning (AL) has been proposed to solve this problem, as it builds efficient training sets through iterative improvement of model performance. During the construction of the training set, unlabeled samples are evaluated by a user-defined heuristic and ranked, and the most relevant samples are then labeled by an expert user. In this work, two different AL approaches (Confidence Heuristics and Committee) are presented for classifying remote sensing imagery. In the experiments, our AL approaches achieve excellent effectiveness compared with well-known approaches from the literature on two different datasets.
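The pool-based loop the abstract describes, evaluating unlabeled samples with a heuristic, ranking them, and having an expert label the most relevant ones, can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the nearest-centroid classifier, the margin-based confidence heuristic, and all names below are assumptions chosen to keep the example runnable.

```python
import math

def centroid(points):
    """Mean vector of a list of equal-length tuples."""
    dim = len(points[0])
    return tuple(sum(p[i] for p in points) / len(points) for i in range(dim))

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def margin(x, centroids):
    """Confidence heuristic: distance margin between the two nearest
    class centroids; a small margin means an uncertain sample."""
    ds = sorted(dist(x, c) for c in centroids.values())
    return ds[1] - ds[0]

def active_learning(pool, oracle, seed_labels, rounds=3):
    """Pool-based AL: each round fits nearest-centroid on the labeled set,
    queries the least-confident unlabeled sample, and asks the oracle
    (the expert user) for its label."""
    labeled = dict(seed_labels)                       # sample index -> label
    for _ in range(rounds):
        cents = {lab: centroid([pool[i] for i, l in labeled.items() if l == lab])
                 for lab in set(labeled.values())}
        unlabeled = [i for i in range(len(pool)) if i not in labeled]
        if not unlabeled:
            break
        query = min(unlabeled, key=lambda i: margin(pool[i], cents))
        labeled[query] = oracle(query)                # expert annotates one sample
    return labeled

# Toy pool: two clusters standing in for "forest" (0) vs "deforested" (1).
pool = [(-2.0, 0.0), (-1.5, 0.5), (-1.0, 0.0), (1.0, 0.0), (1.5, 0.2), (2.0, 0.0)]
oracle = lambda i: 0 if pool[i][0] < 0 else 1
labels = active_learning(pool, oracle, seed_labels={0: 0, 5: 1}, rounds=2)
print(labels)
```

A Committee approach would replace `margin` with a disagreement score over several classifiers trained on the same labeled set; the outer query loop stays identical.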
Citations: 7
Journal
2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)