Fine-grained Micro-Expression Generation based on Thin-Plate Spline and Relative AU Constraint

Sirui Zhao, Shukang Yin, Huaying Tang, Rijin Jin, Yifan Xu, Tong Xu, Enhong Chen
DOI: 10.1145/3503161.3551597
Published in: Proceedings of the 30th ACM International Conference on Multimedia, 10 October 2022
Citations: 3

Abstract

As a typical psychological stress reaction, a micro-expression (ME) is usually leaked quickly on a human face and can reveal true feelings and emotional cognition. Automatic ME analysis (MEA) therefore has essential applications in safety, clinical, and other fields. However, the lack of adequate ME data has severely hindered MEA research. To overcome this dilemma, and encouraged by current image generation techniques, this paper proposes a fine-grained ME generation method that enhances ME data in both volume and diversity. Specifically, we first estimate non-linear ME motion using a thin-plate spline transformation with a dense motion network. The estimated ME motion transformations, including optical flow and occlusion masks, are then sent to the generation network to synthesize the target facial micro-expression. In particular, we obtain the action units (AUs) of the source ME relative to the target face as a constraint that encourages the network to ignore expression-irrelevant movements, thereby generating fine-grained MEs. Through comparative experiments on the CASME II, SMIC, and SAMM datasets, we demonstrate the effectiveness and superiority of our method. Source code is provided at https://github.com/MEA-LAB-421/MEGC2022-Generation.
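The first stage of the method rests on the classical thin-plate spline (TPS) transformation, which maps a set of source keypoints to driving keypoints while smoothly interpolating the motion everywhere else. The sketch below is a minimal NumPy illustration of the underlying TPS math only; it is not the paper's dense motion network, and the function names are ours.

```python
import numpy as np

def tps_kernel(r2):
    # TPS radial basis U(r) = r^2 * log(r^2), with U(0) = 0 by convention.
    return np.where(r2 == 0, 0.0, r2 * np.log(np.maximum(r2, 1e-12)))

def fit_tps(src, dst):
    """Fit a 2-D thin-plate spline mapping src control points onto dst.

    src, dst: (N, 2) arrays of keypoint coordinates.
    Returns a function that warps arbitrary (M, 2) point arrays."""
    n = src.shape[0]
    # Pairwise squared distances between control points.
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = tps_kernel(d2)                       # (N, N) radial-basis block
    P = np.hstack([np.ones((n, 1)), src])    # (N, 3) affine block
    # Standard TPS linear system: [[K, P], [P^T, 0]] @ params = [dst, 0].
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    params = np.linalg.solve(A, b)           # (N+3, 2)
    w, a = params[:n], params[n:]            # non-rigid weights, affine part

    def transform(pts):
        d2p = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
        U = tps_kernel(d2p)                  # (M, N)
        return U @ w + np.hstack([np.ones((pts.shape[0], 1)), pts]) @ a

    return transform
```

By construction the spline interpolates the control points exactly (the warp carries each source keypoint onto its driving keypoint) and extends the motion smoothly in between, which is what makes TPS attractive for modelling subtle, non-linear facial movements.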
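The relative-AU constraint transfers the *change* in action-unit intensities from the source ME clip, rather than its absolute AU values, so that identity-specific expression offsets of the target face are preserved. A hedged sketch of such a loss term follows; the clipping range and L2 weighting are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def relative_au_loss(au_src_onset, au_src_apex, au_gen, au_tgt_neutral):
    """Illustrative relative-AU constraint.

    The expression-relevant motion is the change in AU intensities from
    the source onset frame to its apex frame; the generated face should
    show the same change relative to the target's neutral AUs.
    All inputs are (num_aus,) arrays of AU intensities in [0, 1]."""
    relative = au_src_apex - au_src_onset            # motion to transfer
    target = np.clip(au_tgt_neutral + relative, 0.0, 1.0)
    return float(np.mean((au_gen - target) ** 2))    # L2 penalty
```

Penalizing deviation from this relative target, instead of from the source AUs directly, is what discourages the generator from copying expression-irrelevant movements of the source face.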