{"title":"使用分段M-H模型实现基于内容的三维手语索引","authors":"Kabil Jaballah","doi":"10.1109/ICTA.2013.6815316","DOIUrl":null,"url":null,"abstract":"3D sign language is a brand new technology that provides tools to create 3D signed content based on avatars. Pushed by the advances in computer graphics and many other advantages compared with videos of live signers, 3D sign language is getting more interest and lots of 3D signed scenes are being recorded and used for multiple purposes like young deaf teaching. In Tunsia, we created WebSign, a system that translates any textual content into any signed language through an avatar. Many similar systems have been proposed during the past few years. Unfortunately, the created contents are not cataloged efficiently and subsequently could not be retrieved in a relevant way. In this paper, we propose a new approach for automatic 3D signed contents indexing and matching. Our approach is based on classifying automatically sign language parameters (hand shape, location, orientation and movement). We also propose a new model to represent the recognized parameters based on the Mouvement-Hold Model (MHM). We implemented the designed approach and tested it on a repository of more than 2000 3D signed scenes issued from multiple systems. Results are encouraging since they reached up to 90% for the parameters recognition rate.","PeriodicalId":188977,"journal":{"name":"Fourth International Conference on Information and Communication Technology and Accessibility (ICTA)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Towards content-based 3D sign language indexing using segmental M-H model\",\"authors\":\"Kabil Jaballah\",\"doi\":\"10.1109/ICTA.2013.6815316\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"3D sign language is a brand new technology that provides tools to create 3D signed content based on avatars. Pushed by the advances in computer graphics and many other advantages compared with videos of live signers, 3D sign language is getting more interest and lots of 3D signed scenes are being recorded and used for multiple purposes like young deaf teaching. In Tunsia, we created WebSign, a system that translates any textual content into any signed language through an avatar. Many similar systems have been proposed during the past few years. Unfortunately, the created contents are not cataloged efficiently and subsequently could not be retrieved in a relevant way. In this paper, we propose a new approach for automatic 3D signed contents indexing and matching. Our approach is based on classifying automatically sign language parameters (hand shape, location, orientation and movement). We also propose a new model to represent the recognized parameters based on the Mouvement-Hold Model (MHM). We implemented the designed approach and tested it on a repository of more than 2000 3D signed scenes issued from multiple systems. 
Results are encouraging since they reached up to 90% for the parameters recognition rate.\",\"PeriodicalId\":188977,\"journal\":{\"name\":\"Fourth International Conference on Information and Communication Technology and Accessibility (ICTA)\",\"volume\":\"64 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Fourth International Conference on Information and Communication Technology and Accessibility (ICTA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICTA.2013.6815316\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Fourth International Conference on Information and Communication Technology and Accessibility (ICTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTA.2013.6815316","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Towards content-based 3D sign language indexing using segmental M-H model
3D sign language is an emerging technology that provides tools to create 3D signed content based on avatars. Driven by advances in computer graphics, and offering many advantages over videos of live signers, 3D sign language is attracting growing interest, and many 3D signed scenes are being recorded and used for purposes such as teaching young deaf learners. In Tunisia, we created WebSign, a system that translates any textual content into signed language through an avatar. Many similar systems have been proposed over the past few years. Unfortunately, the created content is not cataloged efficiently and consequently cannot be retrieved in a relevant way. In this paper, we propose a new approach for automatically indexing and matching 3D signed content. Our approach is based on automatically classifying the sign language parameters (hand shape, location, orientation, and movement). We also propose a new model, based on the Movement-Hold Model (MHM), to represent the recognized parameters. We implemented the designed approach and tested it on a repository of more than 2000 3D signed scenes produced by multiple systems. Results are encouraging: the parameter recognition rate reaches up to 90%.
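The abstract does not publish the data structures behind the segmental M-H representation, so the following is only a minimal illustrative sketch. It assumes the standard Movement-Hold view of a sign as an alternating sequence of hold and movement segments, with each hold carrying the classified manual parameters; all names (Hold, Movement, Sign, match_score) are hypothetical, and the naive segment-wise matcher stands in for whatever indexing scheme the paper actually uses.

```python
# Hypothetical sketch of a segmental M-H representation and matching.
# Not the paper's implementation: structures and matching are assumptions.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Hold:
    """A static posture: the classified manual parameters at rest."""
    hand_shape: str    # e.g. "flat", "fist" (a classifier's output label)
    location: str      # e.g. "forehead", "chest", "neutral_space"
    orientation: str   # palm orientation, e.g. "palm_out"

@dataclass(frozen=True)
class Movement:
    """A dynamic transition between two holds."""
    path: str          # e.g. "straight", "arc", "circular"

@dataclass
class Sign:
    """A sign as an alternating M-H segment sequence, e.g. H-M-H."""
    gloss: str
    segments: list = field(default_factory=list)

def match_score(a: Sign, b: Sign) -> float:
    """Naive position-wise matching: fraction of aligned equal segments.
    A real index would need a more robust alignment, e.g. edit distance."""
    hits = sum(1 for x, y in zip(a.segments, b.segments) if x == y)
    return hits / max(len(a.segments), len(b.segments), 1)

# Usage: compare two scenes that encode the same H-M-H structure.
scene_a = Sign("HELLO", [Hold("flat", "forehead", "palm_out"),
                         Movement("arc"),
                         Hold("flat", "neutral_space", "palm_out")])
scene_b = Sign("HELLO", [Hold("flat", "forehead", "palm_out"),
                         Movement("arc"),
                         Hold("flat", "neutral_space", "palm_out")])
print(match_score(scene_a, scene_b))  # 1.0 for identical segment sequences
```

The design choice this sketch reflects is the one the abstract motivates: once the four parameters are classified per segment, matching reduces to comparing discrete segment sequences rather than raw 3D animation data, which is what makes content-based indexing of avatar scenes tractable.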