Fatimah Albargi, Naima Khan, Indrajeet Ghosh, Ahana Roy
{"title":"BeautyNet:一个使用腕带传感器的化妆活动识别框架","authors":"Fatimah Albargi, Naima Khan, Indrajeet Ghosh, Ahana Roy","doi":"10.1109/SMARTCOMP58114.2023.00072","DOIUrl":null,"url":null,"abstract":"The significance of enhancing facial features has grown increasingly in popularity among all groups of people bringing a surge in makeup activities. The makeup market is one of the most profitable and founding sectors in the fashion industry which involves product retailing and demands user training. Makeup activities imply exceptionally delicate hand movements and require much training and practice for perfection. However, the only available choices in learning makeup activities are hands-on workshops by professional instructors or, at most, video-based visual instructions. None of these exhibits immense benefits to beginners, or visually impaired people. One can consistently watch and listen to the best of their abilities, but to precisely practice, perform, and reach makeup satisfaction, recognition from an IoT (Internet-of-Things) device with results and feedback would be the utmost support. In this work, we propose a makeup activity recognition framework, BeautyNet which detects different makeup activities from wrist-worn sensor data collected from ten participants of different age groups in two experimental setups. Our framework employs a LSTM-autoencoder based classifier to extract features from the sensor data and classifies five makeup activities (i.e., applying cream, lipsticks, blusher, eyeshadow, and mascara) in controlled and uncontrolled environment. Empirical results indicate that BeautyNet achieves 95% and 93% accuracy for makeup activity detection in controlled and uncontrolled settings, respectively. In addition, we evaluate BeautyNet with various traditional machine learning algorithms using our in-house dataset and noted an increase in accuracy by ≈ 4-7%.","PeriodicalId":163556,"journal":{"name":"2023 IEEE International Conference on Smart Computing (SMARTCOMP)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"BeautyNet: A Makeup Activity Recognition Framework using Wrist-worn Sensor\",\"authors\":\"Fatimah Albargi, Naima Khan, Indrajeet Ghosh, Ahana Roy\",\"doi\":\"10.1109/SMARTCOMP58114.2023.00072\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The significance of enhancing facial features has grown increasingly in popularity among all groups of people bringing a surge in makeup activities. The makeup market is one of the most profitable and founding sectors in the fashion industry which involves product retailing and demands user training. Makeup activities imply exceptionally delicate hand movements and require much training and practice for perfection. However, the only available choices in learning makeup activities are hands-on workshops by professional instructors or, at most, video-based visual instructions. None of these exhibits immense benefits to beginners, or visually impaired people. One can consistently watch and listen to the best of their abilities, but to precisely practice, perform, and reach makeup satisfaction, recognition from an IoT (Internet-of-Things) device with results and feedback would be the utmost support. 
In this work, we propose a makeup activity recognition framework, BeautyNet which detects different makeup activities from wrist-worn sensor data collected from ten participants of different age groups in two experimental setups. Our framework employs a LSTM-autoencoder based classifier to extract features from the sensor data and classifies five makeup activities (i.e., applying cream, lipsticks, blusher, eyeshadow, and mascara) in controlled and uncontrolled environment. Empirical results indicate that BeautyNet achieves 95% and 93% accuracy for makeup activity detection in controlled and uncontrolled settings, respectively. In addition, we evaluate BeautyNet with various traditional machine learning algorithms using our in-house dataset and noted an increase in accuracy by ≈ 4-7%.\",\"PeriodicalId\":163556,\"journal\":{\"name\":\"2023 IEEE International Conference on Smart Computing (SMARTCOMP)\",\"volume\":\"3 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE International Conference on Smart Computing (SMARTCOMP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SMARTCOMP58114.2023.00072\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE International Conference on Smart Computing (SMARTCOMP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SMARTCOMP58114.2023.00072","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
BeautyNet: A Makeup Activity Recognition Framework using Wrist-worn Sensor
Enhancing facial features has grown increasingly popular across all groups of people, bringing a surge in makeup activities. The makeup market is one of the most profitable and foundational sectors of the fashion industry; it involves product retailing and demands user training. Makeup activities involve exceptionally delicate hand movements and require considerable training and practice to perfect. However, the only available options for learning makeup are hands-on workshops with professional instructors or, at most, video-based visual instruction. Neither offers substantial benefit to beginners or to visually impaired people. One can watch and listen to the best of one's ability, but to practice precisely, perform well, and reach satisfaction with the result, recognition from an IoT (Internet-of-Things) device that provides results and feedback would be the strongest support. In this work, we propose BeautyNet, a makeup activity recognition framework that detects different makeup activities from wrist-worn sensor data collected from ten participants of different age groups in two experimental setups. Our framework employs an LSTM-autoencoder-based classifier to extract features from the sensor data and classify five makeup activities (i.e., applying cream, lipstick, blusher, eyeshadow, and mascara) in controlled and uncontrolled environments. Empirical results indicate that BeautyNet achieves 95% and 93% accuracy for makeup activity detection in controlled and uncontrolled settings, respectively. In addition, we compare BeautyNet against various traditional machine learning algorithms on our in-house dataset and observe an accuracy improvement of ≈ 4-7%.
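To make the LSTM-autoencoder-based classifier concrete, below is a minimal PyTorch sketch of the general idea: an LSTM encoder compresses a window of wrist-sensor samples into a latent feature vector, an LSTM decoder reconstructs the window from that vector, and a linear head classifies the five makeup activities from the same latent code. The abstract does not specify architecture details, so the window length (100 samples), channel count (6, e.g., 3-axis accelerometer plus gyroscope), latent size (64), and joint reconstruction-plus-classification loss are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class LSTMAutoencoderClassifier(nn.Module):
    """Illustrative LSTM autoencoder whose latent code also feeds a 5-way activity classifier."""

    def __init__(self, n_channels=6, seq_len=100, latent_dim=64, n_classes=5):
        super().__init__()
        self.seq_len = seq_len
        # Encoder: compress a window of wrist-sensor samples into a latent vector.
        self.encoder = nn.LSTM(n_channels, latent_dim, batch_first=True)
        # Decoder: reconstruct the window from the repeated latent vector.
        self.decoder = nn.LSTM(latent_dim, latent_dim, batch_first=True)
        self.reconstruct = nn.Linear(latent_dim, n_channels)
        # Classifier head over the latent code
        # (applying cream, lipstick, blusher, eyeshadow, mascara).
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x):                      # x: (batch, seq_len, n_channels)
        _, (h, _) = self.encoder(x)            # h: (1, batch, latent_dim)
        z = h[-1]                              # latent feature vector per window
        repeated = z.unsqueeze(1).repeat(1, self.seq_len, 1)
        dec_out, _ = self.decoder(repeated)
        x_hat = self.reconstruct(dec_out)      # reconstruction of the input window
        logits = self.classifier(z)            # activity prediction
        return x_hat, logits


# Hypothetical joint training step: reconstruction loss + classification loss.
model = LSTMAutoencoderClassifier()
x = torch.randn(8, 100, 6)                     # dummy batch: 8 windows, 100 samples, 6 channels
y = torch.randint(0, 5, (8,))                  # dummy activity labels
x_hat, logits = model(x)
loss = nn.functional.mse_loss(x_hat, x) + nn.functional.cross_entropy(logits, y)
loss.backward()
```

In such a setup, the reconstruction term encourages the latent code to summarize the motion pattern of each sensor window, while the classification term shapes it toward separating the five activities; the same latent features could equally be fed to the traditional machine learning baselines mentioned in the abstract for comparison.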