
Latest publications: IEEE International Conference on Automatic Face & Gesture Recognition and Workshops

16th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2021, Jodhpur, India, December 15-18, 2021
Pub Date: 2021-01-01 DOI: 10.1109/FG52635.2021
Citations: 1
Message from the General and Program Chairs FG 2020
Pub Date: 2020-01-01 DOI: 10.1109/FG47880.2020.00150
J. Wachs, Sergio Escalera, J. Cohn, A. A. Salah, Arun Ross
Citations: 0
Quantificational Subordination as Anaphora to a Function
Pub Date: 2019-08-10 DOI: 10.1007/978-3-662-59648-7_4
Matthew Gotham
Citations: 0
Structure Sensitive Tier Projection: Applications and Formal Properties
Pub Date: 2019-08-10 DOI: 10.1007/978-3-662-59648-7_3
Aniello De Santo, T. Graf
Citations: 29
Undecidability of a Newly Proposed Calculus for CatLog3
Pub Date: 2019-08-10 DOI: 10.1007/978-3-662-59648-7_5
M. Kanovich, S. Kuznetsov, A. Scedrov
Citations: 2
A Topos-Based Approach to Building Language Ontologies
Pub Date: 2019-08-10 DOI: 10.1007/978-3-662-59648-7_2
William Babonnaud
Citations: 4
Proof-Theoretic Aspects of Hybrid Type-Logical Grammars
Pub Date: 2019-08-10 DOI: 10.1007/978-3-662-59648-7_6
R. Moot, S. Stevens-Guille
Citations: 6
A Purely Surface-Oriented Approach to Handling Arabic Morphology
Pub Date: 2019-08-10 DOI: 10.1007/978-3-662-59648-7_1
Yousuf Aboamer, M. Kracht
Citations: 0
On the Computational Complexity of Head Movement and Affix Hopping
Pub Date: 2019-08-10 DOI: 10.1007/978-3-662-59648-7_7
Miloš Stanojević
Citations: 4
Improving Viseme Recognition with GAN-based Muti-view Mapping
Pub Date: 2019-05-01 DOI: 10.1109/FG.2019.8756589
Dario Augusto Borges Oliveira, Andréa Britto Mattos, E. Morais
Speech recognition technologies in the visual domain can currently only identify words and sentences in still images. Identifying visemes (i.e., the smallest visual units of spoken text) is useful when no language models or dictionaries are available, which is often the case for languages other than English; however, it is challenging, as temporal information cannot be extracted. In parallel, previous works demonstrated that exploring data acquired simultaneously under multiple views can improve recognition accuracy in comparison to single-view data. For many applications, however, most of the available audio-visual datasets are obtained from a single view, essentially due to acquisition limitations. In this work, we address viseme recognition in still images and explore the synthetic generation of additional views to improve overall accuracy. For that, we use Generative Adversarial Networks (GANs) trained with synthetic data to map mouth images acquired in a single arbitrary view to frontal and side views, in which the face is rotated vertically at approximately 30°, 45°, and 60°. Then, we use a state-of-the-art Convolutional Neural Network for classifying the visemes and compare its performance when training only with the original single-view images versus training with the additional views artificially generated by the GANs. We run experiments using three audiovisual corpora acquired under different conditions (the GRID, AVICAR, and OuluVS2 datasets), and our results indicate that the additional views synthesized by the GANs improve viseme recognition accuracy in all tested scenarios.
Citations: 1
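The pipeline the abstract describes (expand a single captured view into synthetic frontal and side views with GAN generators, then classify visemes over the augmented view set) can be sketched roughly as follows. The paper's trained models are not public, so `synthesize_view` and `classify_viseme` below are hypothetical placeholders that only illustrate the data flow and tensor shapes, not the actual GAN or CNN.

```python
import numpy as np

# Frontal view plus the three rotated side views mentioned in the abstract.
VIEW_ANGLES = (0, 30, 45, 60)

def synthesize_view(mouth_img: np.ndarray, angle: int) -> np.ndarray:
    """Placeholder for a per-angle GAN generator G_angle(x).

    A real generator would map the arbitrary-view mouth crop to the
    target view; here we simply return a copy so shapes line up.
    """
    return mouth_img.copy()

def classify_viseme(views: np.ndarray) -> int:
    """Placeholder for the viseme CNN applied to the stacked views.

    A real network would score each viseme class; here we return a
    dummy argmax over per-view channel means.
    """
    return int(np.argmax(views.mean(axis=(1, 2, 3))))

def recognize(mouth_img: np.ndarray) -> int:
    # 1) Expand the single captured view into the synthetic view set.
    views = np.stack([synthesize_view(mouth_img, a) for a in VIEW_ANGLES])
    # 2) Classify the viseme from the augmented multi-view stack.
    return classify_viseme(views)

img = np.random.rand(64, 64, 3)  # one still mouth crop (H, W, C)
views = np.stack([synthesize_view(img, a) for a in VIEW_ANGLES])
print(views.shape)  # (4, 64, 64, 3): one tensor per view angle
print(recognize(img))
```

The point of the sketch is the augmentation step: the classifier sees four views per sample even though only one was captured, which is what the paper credits for the accuracy gains.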