Results and Analysis of ChaLearn LAP Multi-modal Isolated and Continuous Gesture Recognition, and Real Versus Fake Expressed Emotions Challenges
Jun Wan, Sergio Escalera, G. Anbarjafari, H. Escalante, Xavier Baró, Isabelle M Guyon, Meysam Madadi, J. Allik, Jelena Gorbova, Chi Lin, Yiliang Xie
2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
DOI: 10.1109/ICCVW.2017.377
Published: 2017-10-22
Citations: 72
Abstract
We analyze the results of the 2017 ChaLearn Looking at People Challenge at ICCV. The challenge comprised three tracks: (1) large-scale isolated gesture recognition, (2) continuous gesture recognition, and (3) real versus fake expressed emotions. It is the second round for both gesture recognition challenges, first held in the context of the ICPR 2016 workshop on "multimedia challenges beyond visual analysis". In this second round, more participants joined the competitions, and performance improved considerably compared to the first round. In particular, the best recognition accuracy for isolated gesture recognition improved from 56.90% to 67.71% on the IsoGD test set, and the Mean Jaccard Index (MJI) for continuous gesture recognition improved from 0.2869 to 0.6103 on the ConGD test set. The third track is the first challenge on real versus fake expressed emotion classification, covering six emotion categories, for which a novel database was introduced. First place was shared between two teams, who achieved a 67.70% average recognition rate on the test set. The data for all three tracks, the participants' code, and method descriptions are publicly available to allow researchers to keep making progress in the field.
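The abstract reports the Mean Jaccard Index (MJI) for continuous gesture recognition but does not define it. As a reference, the sketch below shows how a ConGD-style per-sequence Jaccard overlap is commonly computed: for each gesture class present in either the ground truth or the prediction, the temporal intersection of labeled frames is divided by their union, and the scores are averaged. This is a minimal illustration, not the challenge's official evaluation code; the function names and the convention that label 0 means "no gesture" are assumptions.

```python
import numpy as np

def jaccard_index(gt_mask: np.ndarray, pred_mask: np.ndarray) -> float:
    """Jaccard overlap between two boolean per-frame masks for one gesture class."""
    intersection = np.logical_and(gt_mask, pred_mask).sum()
    union = np.logical_or(gt_mask, pred_mask).sum()
    return float(intersection / union) if union > 0 else 0.0

def mean_jaccard_index(gt_labels: np.ndarray, pred_labels: np.ndarray,
                       num_classes: int) -> float:
    """Average Jaccard index over the gesture classes that occur in a sequence.

    gt_labels / pred_labels: integer class id per frame (0 = no gesture, assumed).
    Only classes appearing in the ground truth or the prediction contribute.
    """
    scores = []
    for c in range(1, num_classes + 1):
        gt_mask = gt_labels == c
        pred_mask = pred_labels == c
        if gt_mask.any() or pred_mask.any():
            scores.append(jaccard_index(gt_mask, pred_mask))
    return float(np.mean(scores)) if scores else 0.0

# Toy example: a 9-frame sequence with two gesture classes.
gt   = np.array([0, 0, 1, 1, 1, 0, 2, 2, 0])
pred = np.array([0, 1, 1, 1, 0, 0, 2, 2, 2])
print(mean_jaccard_index(gt, pred, num_classes=2))  # (0.5 + 0.667) / 2 ≈ 0.583
```

In the challenge, this per-sequence score would then be averaged over all test sequences to produce the reported MJI.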