
Latest Publications: 2019 32nd SIBGRAPI Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T)

Perfect Storm: DSAs Embrace Deep Learning for GPU-Based Computer Vision
Pub Date: 2019-10-01 | DOI: 10.1109/SIBGRAPI-T.2019.00007
M. Pias, S. Botelho, Paulo L. J. Drews-Jr
This paper explores Domain-Specific Deep Learning Architectures for GPU Computer Vision through a "brainstorming" approach on selected hands-on topics in the area. We intend to discuss the application of Deep Neural Networks (DNNs) to image classification problems through the tools, frameworks, and data pipelines commonly used to train and deploy DNNs on GPUs and Domain-Specific Architectures (DSAs).
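The workflow the abstract alludes to — a data pipeline feeding mini-batches of images into a classifier trained on accelerator hardware — can be sketched framework-free with a tiny NumPy softmax classifier. The batching helper, toy data, and hyperparameters below are illustrative assumptions standing in for the real image pipelines, frameworks, and GPU/DSA back ends the tutorial covers.

```python
import numpy as np

rng = np.random.default_rng(42)

def batches(X, y, batch_size):
    """Minimal data pipeline: shuffle, then yield mini-batches --
    the same pattern DataLoader-style pipelines implement at scale."""
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        sel = idx[start:start + batch_size]
        yield X[sel], y[sel]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy "images": 64-pixel flattened inputs, 3 classes, shifted so the
# classes are linearly separable (purely synthetic data).
n, d, k = 300, 64, 3
X = rng.standard_normal((n, d))
y = rng.integers(0, k, size=n)
X += np.eye(k)[y] @ rng.standard_normal((k, d)) * 2.0

# Linear softmax classifier trained with mini-batch gradient descent.
W = np.zeros((d, k))
for epoch in range(20):
    for xb, yb in batches(X, y, 32):
        p = softmax(xb @ W)
        p[np.arange(len(yb)), yb] -= 1.0      # dL/dlogits for cross-entropy
        W -= 0.1 * xb.T @ p / len(yb)

acc = (softmax(X @ W).argmax(axis=1) == y).mean()
```

On a GPU or DSA the same loop runs with the matrix multiplies dispatched to the accelerator; the structure (pipeline, forward pass, gradient, update) is unchanged.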
Citations: 0
Message from the Tutorial Program Chairs
Pub Date: 2019-10-01 | DOI: 10.1109/sibgrapi-t.2019.00005
Citations: 0
Fast-Forward Methods for Egocentric Videos: A Review
Pub Date: 2019-10-01 | DOI: 10.1109/SIBGRAPI-T.2019.00009
M. Silva, W. Ramos, Alan C. Neves, Edson Roteia Araujo Junior, M. Campos, E. R. Nascimento
The emergence of low-cost, high-quality personal wearable cameras, combined with the large and growing storage capacity of video-sharing websites, has evoked a growing interest in first-person videos. A First-Person Video is usually composed of monotonous, long-running, unedited streams captured by a device attached to the user's body, which makes it visually unpleasant and tedious to watch. Thus, there is a growing need to provide quick access to the information therein. In the last few years, a popular approach to retrieving information from videos has been to produce a short version of the input video by creating a video summary; however, this approach disrupts the temporal context of the recording. Fast-Forward is another approach that creates a shorter version of the video while preserving its context by increasing the playback speed. Although Fast-Forward methods keep the story of the recording, they do not consider the semantic load of the input video. The Semantic Fast-Forward approach creates a shorter version of First-Person Videos that handles both the video context and the emphasis of relevant portions, preserving the semantic load of the input video. In this paper, we present a review of representative methods in both fast-forward and semantic fast-forward approaches and discuss future directions for the area.
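The contrast the abstract draws can be made concrete with a minimal adaptive-sampling sketch: a plain fast-forward skips frames uniformly, while a semantic variant skips less where a per-frame relevance score is high. The scoring input, skip bounds, and interpolation rule below are illustrative assumptions, not any specific published method from the review.

```python
def semantic_fast_forward(scores, min_skip=1, max_skip=8):
    """Select frame indices for an accelerated video.

    Frames with high semantic relevance (score near 1.0) are sampled
    densely (small skip); irrelevant frames (score near 0.0) are
    skipped aggressively. Uniform fast-forward is the special case
    where every score is equal.
    """
    selected = []
    i = 0
    while i < len(scores):
        selected.append(i)
        # Interpolate the skip: relevant segments keep more frames.
        skip = max_skip - (max_skip - min_skip) * scores[i]
        i += max(min_skip, round(skip))
    return selected

# A 30-frame clip whose middle segment is semantically relevant.
scores = [0.0] * 10 + [1.0] * 10 + [0.0] * 10
frames = semantic_fast_forward(scores)
# The relevant middle segment is sampled densely; the rest sparsely.
```

With uniform scores this reduces to fixed-rate frame dropping, which is exactly the "context-preserving but semantics-blind" behavior the abstract attributes to plain fast-forward.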
Citations: 0
A Survey of Transfer Learning for Convolutional Neural Networks
Pub Date: 2019-10-01 | DOI: 10.1109/SIBGRAPI-T.2019.00010
R. Ribani, M. Marengoni
Transfer learning is an emerging topic that may drive the success of machine learning in research and industry. The lack of data for specific tasks is one of the main reasons to use it, since collecting and labeling data can be very expensive and time-consuming, and recent concerns about privacy make it difficult to use real data from users. Transfer learning helps to rapidly prototype new machine learning models using pre-trained models from a source task, since training on millions of images can take time and requires expensive GPUs. In this survey, we review the concepts and definitions related to transfer learning and list the different terms used in the literature. We bring together the points of view of the authors of prior surveys, adding some more recent findings, in order to give a clear vision of directions for future work in this field of research.
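As a deliberately tiny illustration of the frozen-backbone setup such surveys cover, the sketch below stands a fixed random projection in for a pre-trained CNN feature extractor and trains only a new classification head on a toy target task. The backbone, data, and hyperparameters are all illustrative assumptions; in practice the frozen weights would come from a network trained on a large source dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained backbone: weights "learned" on a source
# task, kept frozen during transfer (here: a fixed random projection).
W_backbone = rng.standard_normal((8, 16)) / np.sqrt(8)

def extract_features(X):
    """Frozen feature extractor -- never updated on the target task."""
    return np.maximum(X @ W_backbone, 0.0)  # ReLU features

def train_head(X, y, lr=0.5, epochs=300):
    """Train only a new logistic-regression head on the target task."""
    F = extract_features(X)
    w = np.zeros(F.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid
        grad = p - y                            # dL/dlogits
        w -= lr * F.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy target task: two well-separated Gaussian blobs in 8-D input space.
X0 = rng.normal(-1.0, 0.3, size=(50, 8))
X1 = rng.normal(+1.0, 0.3, size=(50, 8))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(50), np.ones(50)])

w, b = train_head(X, y)
preds = (extract_features(X) @ w + b > 0).astype(float)
accuracy = (preds == y).mean()
```

Training only the head is orders of magnitude cheaper than training the backbone, which is the cost argument the abstract makes for prototyping with pre-trained models.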
Citations: 110
Title Page III
Pub Date: 2019-10-01 | DOI: 10.1109/sibgrapi-t.2019.00002
Citations: 0
Title Page I
Pub Date: 1990-05-01 | DOI: 10.1109/estream.2019.8732164
Citations: 0