{"title":"Edge Computing Enables Assessment of Student Community Building: An Emotion Recognition Method Based on TinyML","authors":"Shuo Liu","doi":"10.1002/itl2.645","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Deep network-based video sentiment analysis is crucial for online evaluation tasks. However, these deep models are difficult to run on intelligent edge devices with limited computing resources. In addition, video data are susceptible to lighting interference, distortion, and background noise, which severely limits the performance of facial expression recognition. To relieve these issues, we develop an effective multi-scale semantic fusion tiny machine learning (TinyML) model based on a spatiotemporal graph convolutional network (ST-GCN) which enables robust expression recognition from facial landmark sequences. Specifically, we construct regional-connected graph data based on facial landmarks which are collected from cameras on different mobile devices. In existing spatiotemporal graph convolutional networks, we leverage the multi-scale semantic fusion mechanism to mine the hierarchical structure of facial landmarks. The experimental results on CK+ and online student community assessment sentiment analysis (OSCASA) dataset confirm that our approach yields comparable results.</p>\n </div>","PeriodicalId":100725,"journal":{"name":"Internet Technology Letters","volume":"8 2","pages":""},"PeriodicalIF":0.9000,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet Technology Letters","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/itl2.645","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"TELECOMMUNICATIONS","Score":null,"Total":0}
Abstract
Deep network-based video sentiment analysis is crucial for online evaluation tasks. However, such deep models are difficult to run on intelligent edge devices with limited computing resources. In addition, video data are susceptible to lighting interference, distortion, and background noise, which severely limits the performance of facial expression recognition. To address these issues, we develop an effective multi-scale semantic fusion tiny machine learning (TinyML) model based on a spatiotemporal graph convolutional network (ST-GCN), which enables robust expression recognition from facial landmark sequences. Specifically, we construct region-connected graph data from facial landmarks collected by cameras on different mobile devices. On top of the existing spatiotemporal graph convolutional network, we leverage a multi-scale semantic fusion mechanism to mine the hierarchical structure of facial landmarks. Experimental results on the CK+ and online student community assessment sentiment analysis (OSCASA) datasets confirm that our approach yields comparable results.
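To make the graph construction and spatiotemporal convolution described in the abstract concrete, the following is a minimal, illustrative sketch (not the authors' implementation): it builds a region-connected adjacency matrix over an assumed 68-point facial landmark layout and applies one generic ST-GCN-style layer (spatial graph convolution followed by a temporal convolution). The region grouping, layer widths, and temporal kernel size are assumptions; the paper's multi-scale semantic fusion mechanism and TinyML deployment details are not reproduced here.

```python
# Minimal sketch, assuming 68 dlib-style facial landmarks with (x, y)
# coordinates per frame. Region boundaries and layer sizes are illustrative.
import torch
import torch.nn as nn

NUM_LANDMARKS = 68  # assumed 68-point landmark layout

# Hypothetical facial regions used to add intra-region edges.
REGIONS = {
    "jaw": range(0, 17),
    "brows": range(17, 27),
    "nose": range(27, 36),
    "eyes": range(36, 48),
    "mouth": range(48, 68),
}

def build_region_adjacency(num_nodes: int = NUM_LANDMARKS) -> torch.Tensor:
    """Fully connect landmarks within each facial region, add self-loops,
    and return a symmetrically normalized adjacency matrix."""
    adj = torch.eye(num_nodes)
    for region in REGIONS.values():
        idx = list(region)
        for i in idx:
            for j in idx:
                adj[i, j] = 1.0
    deg = adj.sum(dim=1)
    d_inv_sqrt = deg.pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

class STGCNLayer(nn.Module):
    """One spatial graph convolution followed by a temporal convolution,
    operating on tensors of shape (batch, channels, frames, landmarks)."""
    def __init__(self, in_channels: int, out_channels: int, adj: torch.Tensor):
        super().__init__()
        self.register_buffer("adj", adj)
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.temporal = nn.Conv2d(out_channels, out_channels,
                                  kernel_size=(9, 1), padding=(4, 0))
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.spatial(x)                              # mix channels per node
        x = torch.einsum("nctv,vw->nctw", x, self.adj)   # aggregate over the landmark graph
        return self.relu(self.temporal(x))               # mix information across frames

# Usage: a batch of 8 clips, 2 coordinate channels, 30 frames, 68 landmarks.
adj = build_region_adjacency()
layer = STGCNLayer(in_channels=2, out_channels=64, adj=adj)
out = layer(torch.randn(8, 2, 30, NUM_LANDMARKS))
print(out.shape)  # torch.Size([8, 64, 30, 68])
```

Operating on landmark coordinates rather than raw video frames keeps the input tensor small, which is consistent with the abstract's motivation of running the model on resource-constrained edge devices.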