Classification of Visual-Verbal Cognitive Style in Multimedia Learning using Eye-Tracking and Machine Learning

Aloysius Gonzaga Pradnya Sidhawara, S. Wibirama, T. B. Adji, Sri Kusrohmaniah
{"title":"Classification of Visual-Verbal Cognitive Style in Multimedia Learning using Eye-Tracking and Machine Learning","authors":"Aloysius Gonzaga Pradnya Sidhawara, S. Wibirama, T. B. Adji, Sri Kusrohmaniah","doi":"10.1109/ICST50505.2020.9732880","DOIUrl":null,"url":null,"abstract":"Multimedia learning is defined as building mental representations from words and pictures. In multimedia learning, the difference in cognitive style indicates different learning strategies. The cognitive styles of visual and verbal exert influence on behavior, preferences, and even learning outcomes. On the other hand, eye-tracking has been used to study cognitive aspects during multimedia learning. Unfortu-nately, previous studies on the identification of cognitive styles were limited to statistical descriptive analysis. The use of eye-tracking was limited merely for validation purposes. In addition, previous studies have yet to apply automatic classification of cognitive style based on eye-tracking data. Hence, this study proposes a method to automatically classify visual-verbal cogni-tive styles based on eye-tracking metrics. We implemented three shallow classifiers: K-Nearest Neighbors, Random Forest, and Support Vector Machine. Based on our experimental results, Random Forest—enhanced with two selected features from SelectKBest-gained 78% of classification accuracy. Our study has been the first investigation that reveals the possibility of implementing machine learning for automatic classification of cognitive styles based on eye-tracking data.","PeriodicalId":125807,"journal":{"name":"2020 6th International Conference on Science and Technology (ICST)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 6th International Conference on Science and Technology (ICST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICST50505.2020.9732880","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2
Abstract
Multimedia learning is defined as building mental representations from words and pictures. In multimedia learning, differences in cognitive style indicate different learning strategies. The visual and verbal cognitive styles influence behavior, preferences, and even learning outcomes. Eye-tracking, in turn, has been used to study cognitive aspects during multimedia learning. Unfortunately, previous studies on the identification of cognitive styles were limited to descriptive statistical analysis, and eye-tracking was used merely for validation purposes. Moreover, previous studies have yet to apply automatic classification of cognitive style based on eye-tracking data. Hence, this study proposes a method to automatically classify visual-verbal cognitive styles based on eye-tracking metrics. We implemented three shallow classifiers: K-Nearest Neighbors, Random Forest, and Support Vector Machine. In our experiments, Random Forest, enhanced with two features selected by SelectKBest, achieved a classification accuracy of 78%. To our knowledge, this is the first investigation to show that machine learning can be used for automatic classification of cognitive styles based on eye-tracking data.
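As an illustration only, the Python sketch below shows how a pipeline of this kind could be assembled with scikit-learn: SelectKBest reduces the eye-tracking features to the two most informative ones, and a Random Forest performs the visual-verbal classification. The feature matrix, labels, feature names, and hyperparameters are placeholders, not the authors' data or exact settings.

```python
# Sketch of a SelectKBest (k=2) + Random Forest pipeline for visual-verbal
# cognitive style classification from eye-tracking metrics, evaluated with
# cross-validation. All data below is synthetic and purely illustrative.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical eye-tracking metrics per participant (e.g. fixation count,
# mean fixation duration, dwell time on text vs. picture regions).
X = rng.normal(size=(40, 6))      # 40 participants, 6 eye-tracking features
y = rng.integers(0, 2, size=40)   # 0 = verbal, 1 = visual (illustrative labels)

pipeline = Pipeline([
    ("select", SelectKBest(score_func=f_classif, k=2)),  # keep the 2 best features
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])

# 5-fold cross-validated accuracy of the combined feature selection + classifier.
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"Mean CV accuracy: {scores.mean():.2f}")
```

Wrapping the feature selector and the classifier in a single Pipeline keeps the selection step inside each cross-validation fold, which avoids leaking information from the held-out data into the selected features.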