Sign language identification and recognition: A comparative study

Open Computer Science · Computer Science, Theory & Methods (Q3, IF 1.1) · Publication date: 2022-01-01 · DOI: 10.1515/comp-2022-0240
Ahmed A. Sultan, Walied Makram, Mohammed Kayed, Abdelmaged Amin Ali
{"title":"Sign language identification and recognition: A comparative study","authors":"Ahmed A. Sultan, Walied Makram, Mohammed Kayed, Abdelmaged Amin Ali","doi":"10.1515/comp-2022-0240","DOIUrl":null,"url":null,"abstract":"Abstract Sign Language (SL) is the main language for handicapped and disabled people. Each country has its own SL that is different from other countries. Each sign in a language is represented with variant hand gestures, body movements, and facial expressions. Researchers in this field aim to remove any obstacles that prevent the communication with deaf people by replacing all device-based techniques with vision-based techniques using Artificial Intelligence (AI) and Deep Learning. This article highlights two main SL processing tasks: Sign Language Recognition (SLR) and Sign Language Identification (SLID). The latter task is targeted to identify the signer language, while the former is aimed to translate the signer conversation into tokens (signs). The article addresses the most common datasets used in the literature for the two tasks (static and dynamic datasets that are collected from different corpora) with different contents including numerical, alphabets, words, and sentences from different SLs. It also discusses the devices required to build these datasets, as well as the different preprocessing steps applied before training and testing. The article compares the different approaches and techniques applied on these datasets. It discusses both the vision-based and the data-gloves-based approaches, aiming to analyze and focus on main methods used in vision-based approaches such as hybrid methods and deep learning algorithms. Furthermore, the article presents a graphical depiction and a tabular representation of various SLR approaches.","PeriodicalId":43014,"journal":{"name":"Open Computer Science","volume":"12 1","pages":"191 - 210"},"PeriodicalIF":1.1000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Open Computer Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1515/comp-2022-0240","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 5

Abstract

Sign Language (SL) is the primary language of deaf and hard-of-hearing people. Each country has its own SL that differs from those of other countries, and each sign in a language is expressed through distinct hand gestures, body movements, and facial expressions. Researchers in this field aim to remove the obstacles that hinder communication with deaf people by replacing device-based techniques with vision-based techniques built on Artificial Intelligence (AI) and Deep Learning. This article highlights two main SL processing tasks: Sign Language Recognition (SLR) and Sign Language Identification (SLID). The latter identifies the language a signer is using, while the former translates the signer's conversation into tokens (signs). The article surveys the datasets most commonly used in the literature for the two tasks (static and dynamic datasets collected from different corpora), whose contents range over numerals, alphabets, words, and sentences from different SLs. It also discusses the devices required to build these datasets and the preprocessing steps applied before training and testing. The article then compares the approaches and techniques applied to these datasets, covering both vision-based and data-glove-based approaches, with a focus on the main methods used in vision-based approaches such as hybrid methods and deep learning algorithms. Finally, it presents a graphical depiction and a tabular representation of the various SLR approaches.
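The abstract contrasts device-based (data-glove) pipelines with vision-based ones built on deep learning. As a rough illustration of the vision-based SLR setting it describes, the sketch below trains a small convolutional classifier on static sign images (e.g., fingerspelled letters). This is only a minimal sketch, not the authors' method: the dataset path, image size, and network shape are assumptions chosen for brevity.

```python
# Minimal vision-based static-sign classifier (illustrative sketch only).
# Assumes a hypothetical directory "signs/train" with one sub-folder per
# sign label, e.g. signs/train/A, signs/train/B, ...
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Preprocessing in the spirit of the steps the survey discusses:
# grayscale conversion, resizing to a fixed frame, tensor conversion.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("signs/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# A deliberately small CNN: two conv/pool stages and a linear head.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, len(train_set.classes)),
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):  # short demo run
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Dynamic (video) signs and the SLID task would swap this per-image classifier for a sequence model over frames, along the lines of the deep-learning approaches the survey reviews.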
Source journal: Open Computer Science (Computer Science, Theory & Methods)
CiteScore: 4.00
Self-citation rate: 0.00%
Articles per year: 24
Review time: 25 weeks