Hand and Face Segmentation with Deep Convolutional Networks using Limited Labelled Data

Ozge Mercanoglu Sincan, Sinan Gencoglu, M. Bacak, H. Keles
{"title":"基于有限标记数据的深度卷积网络手和脸分割","authors":"Ozge Mercanoglu Sincan, Sinan Gencoglu, M. Bacak, H. Keles","doi":"10.1109/ISMSIT.2019.8932835","DOIUrl":null,"url":null,"abstract":"Segmentation is a crucial step for many classification problems. There are many researchers that approach the problem using classical computer vision methods, recently deep learning approaches have been used more frequently in different domains. In this paper, we propose two segmentation networks that mark face and hands from static images for sign language recognition using only a few training data. Our networks have encoder-decoder structure that contains convolutional, max pooling and upsampling layers; the first one is a U-Net based network and the second one is a VGG-based network. We evaluate our models on two sign language datasets; the first one is our Ankara University Turkish Sign Language dataset (AU-TSL) and the second one is Montalbano Italian gesture dataset. Datasets contain background and illumination variations. Also, they are recorded with different signers. We train our models using only 400 images that we randomly selected from video frames. Our experiments show that even when we reduce the training data in half, we can still obtain satisfactory results. Proposed methods have achieved more than 98% precision using 400 frames with both datasets. Our code is available at https://github.com/au-cvml-lab/Hands-and-Face-Segmentation-With-Limited-Data.","PeriodicalId":169791,"journal":{"name":"2019 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Hand and Face Segmentation with Deep Convolutional Networks using Limited Labelled Data\",\"authors\":\"Ozge Mercanoglu Sincan, Sinan Gencoglu, M. Bacak, H. Keles\",\"doi\":\"10.1109/ISMSIT.2019.8932835\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Segmentation is a crucial step for many classification problems. There are many researchers that approach the problem using classical computer vision methods, recently deep learning approaches have been used more frequently in different domains. In this paper, we propose two segmentation networks that mark face and hands from static images for sign language recognition using only a few training data. Our networks have encoder-decoder structure that contains convolutional, max pooling and upsampling layers; the first one is a U-Net based network and the second one is a VGG-based network. We evaluate our models on two sign language datasets; the first one is our Ankara University Turkish Sign Language dataset (AU-TSL) and the second one is Montalbano Italian gesture dataset. Datasets contain background and illumination variations. Also, they are recorded with different signers. We train our models using only 400 images that we randomly selected from video frames. Our experiments show that even when we reduce the training data in half, we can still obtain satisfactory results. Proposed methods have achieved more than 98% precision using 400 frames with both datasets. 
Our code is available at https://github.com/au-cvml-lab/Hands-and-Face-Segmentation-With-Limited-Data.\",\"PeriodicalId\":169791,\"journal\":{\"name\":\"2019 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT)\",\"volume\":\"79 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISMSIT.2019.8932835\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISMSIT.2019.8932835","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Segmentation is a crucial step in many classification problems. Many researchers approach the problem with classical computer vision methods; recently, deep learning approaches have been used more frequently across different domains. In this paper, we propose two segmentation networks that mark the face and hands in static images for sign language recognition using only a small amount of training data. Our networks have an encoder-decoder structure containing convolutional, max pooling and upsampling layers; the first is a U-Net-based network and the second is a VGG-based network. We evaluate our models on two sign language datasets: the Ankara University Turkish Sign Language dataset (AU-TSL) and the Montalbano Italian gesture dataset. Both datasets contain background and illumination variations and are recorded with different signers. We train our models using only 400 images randomly selected from video frames. Our experiments show that even when we cut the training data in half, we still obtain satisfactory results. The proposed methods achieve more than 98% precision using 400 frames on both datasets. Our code is available at https://github.com/au-cvml-lab/Hands-and-Face-Segmentation-With-Limited-Data.
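For readers unfamiliar with this kind of architecture, the sketch below shows a minimal U-Net-style encoder-decoder with convolutional, max pooling and upsampling layers, written in PyTorch. The framework, depth, channel widths, and the three-class output (background, hand, face) are illustrative assumptions rather than the authors' exact design; the official implementation is available in the linked repository.

```python
# Minimal U-Net-style encoder-decoder sketch in PyTorch.
# NOTE: framework, depth, channel widths, and class count are illustrative
# assumptions, not the authors' exact architecture; see the linked repository
# for the official code.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, as in typical U-Net blocks.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, n_classes=3):  # assumed classes: background, hand, face
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)            # max pooling in the encoder
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec2 = conv_block(128 + 64, 64)   # skip connection from enc2
        self.up1 = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = conv_block(64 + 32, 32)    # skip connection from enc1
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # per-pixel class logits at input resolution

# Usage: a 256x256 RGB frame yields a 256x256 per-pixel prediction.
model = TinyUNet()
logits = model(torch.randn(1, 3, 256, 256))
print(logits.shape)  # torch.Size([1, 3, 256, 256])
```

The VGG-based variant described in the paper would keep the same decoder idea but build the encoder from VGG-style stacked convolutional blocks; the exact layer configuration is given in the paper and repository, not in this sketch.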