{"title":"The Semi-Automatic Approach to Extract the Features of Human Facial Region","authors":"Akın Öztopuz, B. Karasulu","doi":"10.1109/ISMSIT.2019.8932890","DOIUrl":null,"url":null,"abstract":"The segmantation of the human facial region from a complex background is the basis for the success of today's applications such as facial recognition, expression extraction, surveillance systems, and the guilty people finding. Finding the face region and then extracting attributes that represent the face is another problematic process that needs to be overcome. In general, as is well done in the literature, using the Viola-Jones method or functions in libraries such as D-lib, the above-mentioned operations can be performed fully automatically. This means that the methods applied automatically to use reference data (e.g., XML data format for Viola-Jones) or detectors (D-lib landmark detection) to find keypoints independent of the given object as input. In this study, it is aimed to extract the face region from the image containing the frontal human face with semi-automatic approaches and to mark the area with eye and nose keypoints on the obtained area. Human face contour and face geometry information are used in face positioning. The eye map (i.e., EyeMap) algorithm was used for eye keypoint extraction, while facial geometry, morphological operations and computer vision library OpenCV template matching functions were used for the nasal region. As a result, the main purpose of this study is to obtain the facial region via ensuring the appropriate features with our semi-automatic approach instead of extracting automatically by using known libraries or mostly by machine learning methods. 
In addition, some discussion and conclusion are involved by our study as well.","PeriodicalId":169791,"journal":{"name":"2019 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 3rd International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISMSIT.2019.8932890","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
The segmentation of the human facial region from a complex background underlies the success of today's applications such as face recognition, expression extraction, surveillance systems, and suspect identification. Finding the face region and then extracting attributes that represent the face is a further challenge that must be overcome. In general, as is well established in the literature, these operations can be performed fully automatically using the Viola-Jones method or functions in libraries such as Dlib. Such automatic methods rely on reference data (e.g., the XML cascade files used by Viola-Jones) or pretrained detectors (e.g., Dlib's landmark detector) to find keypoints independently of the object given as input. This study aims to extract the face region from an image containing a frontal human face with semi-automatic approaches, and to mark the eye and nose keypoints on the obtained region. Human face contour and face-geometry information are used for face positioning. The eye map (i.e., EyeMap) algorithm is used for eye keypoint extraction, while facial geometry, morphological operations, and the template-matching functions of the OpenCV computer vision library are used for the nasal region. In short, the main purpose of this study is to obtain the facial region and the appropriate features with our semi-automatic approach, instead of extracting them automatically with known libraries or, as is most common, with machine-learning methods. Some discussion and conclusions are included in our study as well.