Personalization of industrial human–robot communication through domain adaptation based on user feedback
Pub Date : 2024-03-22 · DOI: 10.1007/s11257-024-09394-1
Abstract
Achieving safe collaboration between humans and robots in an industrial work-cell requires effective communication, which can be provided by a robot perception system built with data-driven machine learning. The challenge for human–robot communication is the scarcity of extensive, labelled training datasets. Because human behaviour varies and environmental conditions affect the performance of perception models, models trained on standard, publicly available datasets fail to generalize well to domain- and application-specific scenarios. Model personalization, which adapts such models to the individual humans involved in the task in the given environment, therefore leads to better model performance. A novel framework is presented that leverages robust modes of communication and gathers feedback from the human partner to auto-label the mode with the sparse dataset. The strength of the contribution lies in using incommensurable multimodal inputs to personalize models with user-specific data. The personalization through feedback-enabled human–robot communication (PF-HRCom) framework is implemented using facial expression recognition as a safety feature to ensure that the human partner is engaged in the collaborative task with the robot. Additionally, PF-HRCom has been applied to a real-time human–robot handover task with a robotic manipulator, whose perception module adapts to the user's facial expressions and personalizes the model using feedback. The framework is, however, applicable to other combinations of multimodal inputs in human–robot collaboration applications.
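The feedback-driven personalization loop can be sketched as follows. This is an illustrative reading of the idea only, not the paper's implementation: the synthetic features, the feedback-derived labels, and the choice of an incrementally trainable `SGDClassifier` are all assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Sketch: labels confirmed through a robust feedback channel are used to
# incrementally adapt a pretrained classifier to one specific user.
rng = np.random.default_rng(0)

classes = np.array([0, 1])  # e.g. 0 = disengaged, 1 = engaged (hypothetical)
model = SGDClassifier(random_state=0)

# Stand-in for a model pretrained on a public dataset.
X_pre = rng.normal(size=(200, 8))
y_pre = (X_pre[:, 0] > 0).astype(int)
model.partial_fit(X_pre, y_pre, classes=classes)

# Online personalization loop: each user sample is auto-labelled via the
# user's confirmed feedback, then folded into the model incrementally.
for _ in range(50):
    x_user = rng.normal(loc=0.5, size=(1, 8))       # user-specific features
    y_feedback = np.array([int(x_user[0, 0] > 0)])  # label from the feedback channel
    model.partial_fit(x_user, y_feedback)

print(model.score(X_pre, y_pre))
```

The key design point is incremental updating (`partial_fit`), so personalization does not require retraining from scratch on the full public dataset.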
Persuasive strategies and emotional states: towards designing personalized and emotion-adaptive persuasive systems
Oladapo Oyebode, Darren Steeves, Rita Orji
Pub Date : 2024-03-05 · DOI: 10.1007/s11257-023-09390-x
Abstract
Persuasive strategies have been widely operationalized in systems and applications to motivate behaviour change across diverse domains. However, little empirical evidence exists on whether persuasive strategies elicit particular emotions, which would inform which strategies are most appropriate for delivering interventions that not only motivate users to perform a target behaviour but also help regulate their current emotional states. We conducted a large-scale study of 660 participants to investigate if and how individuals, including those at different stages of change, respond emotionally to persuasive strategies, and why. Specifically, we examined the relationship between the perceived effectiveness of individual strategies operationalized in a system and the perceived emotional states of participants at different stages of behaviour change. Our findings establish relations between the perceived effectiveness of strategies and the emotions they elicit in individuals at distinct stages of change, and show that the perceived emotions vary across stages for different reasons. For example, the reward strategy is associated with positive emotion only (i.e. happiness) for individuals across distinct stages of change because it induces feelings of personal accomplishment, provides incentives that increase the urge to achieve more goals, and offers a gamified experience. Other strategies are associated with mixed emotions. Our work links emotion theory with behaviour change theories and stages-of-change theory to develop practical guidelines for designing personalized and emotion-adaptive persuasive systems.
Improving collaborative problem-solving skills via automated feedback and scaffolding: a quasi-experimental study with CPSCoach 2.0
Sidney K. D’Mello, Nicholas Duran, Amanda Michaels, Angela E. B. Stewart
Pub Date : 2024-02-14 · DOI: 10.1007/s11257-023-09387-6
Abstract
We present CPSCoach 2.0, an automated system that provides feedback, instructional scaffolding, and practice to help individuals improve three collaborative problem-solving (CPS) skills drawn from a theoretical CPS framework: construction of shared knowledge, negotiation/coordination, and maintaining team function. CPSCoach 2.0 was developed and tested in the context of computer-mediated collaboration (video conferencing) with an educational game. It automatically analyzes users’ speech during a round of collaborative gameplay to provide personalized feedback and to select a target CPS skill for improvement. After multiple cycles of iterative testing and refinement, we evaluated CPSCoach 2.0 in a user study in which 21 dyads (n = 42) completed four rounds of feedback and scaffolding embedded within five rounds of gameplay in a single session. Using a quasi-experimental matching procedure, we found that use of CPSCoach 2.0 was associated with improvement in CPS skill development compared to matched controls. Further, users found the automated feedback moderately accurate and had positive perceptions of the system, and these impressions were stronger for those who received higher scores overall. The results demonstrate the use of automated feedback and instructional scaffolds to support the development of CPS skills.
Informative representations for forgetting-robust knowledge tracing
Zhiyu Chen, Zhilong Shan, Yanhua Zeng
Pub Date : 2024-02-04 · DOI: 10.1007/s11257-024-09391-4
Abstract
Tracing a student’s knowledge state is critical for teaching and learning. Knowledge tracing aims to predict student performance accurately by analyzing historical records on online education platforms. Most studies have used a student’s sequence of skill interactions to predict the probability of correctly answering the latest question, but they still face the challenges of information sparsity and student forgetting. Specifically, the relationship between questions and skills, and the features related to question texts, have not been integrated to enrich information exploration. Moreover, modeling forgetting behavior remains a challenge when assessing a student’s learning gains. In this paper, we present a novel model, Informative Representations for Forgetting-Robust Knowledge Tracing (IFKT). IFKT uses a light graph convolutional network to capture various relational structures via embedding propagation. The resulting embeddings are then assembled with rich interaction features to form a powerful representation. Furthermore, attention-weight assignments are individualized using relative positions as well as the relevance of the current question to historical interaction representations. Finally, we compare IFKT against seven knowledge tracing baselines on three real-world benchmark datasets, demonstrating the superiority of the proposed model.
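The "light graph convolution via embedding propagation" step can be illustrated with a LightGCN-style sketch: propagate embeddings over a normalized question–skill adjacency matrix with no feature transforms or nonlinearities, then average the layer outputs. This is an assumption about the general technique, not IFKT's exact architecture, and the toy graph and dimensions are made up.

```python
import numpy as np

# Toy bipartite question–skill graph: rows = 3 questions, cols = 2 skills.
R = np.array([[1, 0],
              [1, 1],
              [0, 1]], dtype=float)

# Adjacency of the joined graph, then symmetric normalization D^{-1/2} A D^{-1/2}.
n_q, n_s = R.shape
A = np.zeros((n_q + n_s, n_q + n_s))
A[:n_q, n_q:] = R
A[n_q:, :n_q] = R.T
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))

rng = np.random.default_rng(0)
E = rng.normal(size=(n_q + n_s, 4))  # initial question/skill embeddings

# Propagate K = 3 layers (no weight matrices, no activation), average all layers.
layers = [E]
for _ in range(3):
    layers.append(A_hat @ layers[-1])
E_final = np.mean(layers, axis=0)
print(E_final.shape)  # → (5, 4)
```

Each propagation step smooths a node's embedding toward its graph neighbours, which is how relational structure (question–skill links) enters the representation.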
Toward joint utilization of absolute and relative bandit feedback for conversational recommendation
Pub Date : 2024-01-27 · DOI: 10.1007/s11257-023-09388-5
Abstract
Conversational recommendation has been a promising way for recent recommenders to address the cold-start problem suffered by traditional recommender systems. To actively elicit users’ dynamically changing preferences, conversational recommender systems periodically query users’ preferences on item attributes and collect conversational feedback. However, most existing conversational recommender systems allow users to provide only one type of feedback, either absolute or relative. In practice, absolute feedback can be biased and imprecise because of users’ varying rating criteria, while relative feedback struggles to reveal users’ absolute attitudes. Hence, asking only one type of question throughout the whole conversation may not elicit users’ preferences accurately and efficiently. Moreover, many existing conversational recommender systems allow only binary feedback, which can be noisy when users have no particular inclination. To address these issues, we propose a generalized conversational recommendation framework, the hybrid rating-comparison conversational recommender system, which can seamlessly ask absolute and relative questions and incorporate both types of feedback together with possible neutral responses. While it is promising to use different types of feedback together, building a joint model that incorporates them is difficult because they bear different interpretations of users’ preferences. To ensure that relative feedback can be leveraged effectively, we first propose a bandit algorithm, RelativeConUCB. Building on it, we further propose a new bandit algorithm, ArcUCB, which jointly uses absolute and relative feedback with possible neutral responses for preference elicitation. Experiments on both synthetic and real-world datasets validate the advantage of our proposed methods over existing bandit algorithms in conversational recommender systems.
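To make the bandit framing concrete, here is a minimal UCB1 sketch of attribute-level preference elicitation with simulated absolute feedback. This is a generic stand-in for the setting, not the RelativeConUCB or ArcUCB algorithms, and the attribute names and preference values are invented for illustration.

```python
import math
import random

random.seed(1)
true_pref = {"spicy": 0.8, "sweet": 0.3, "salty": 0.5}  # hidden user preferences
counts = {a: 0 for a in true_pref}
rewards = {a: 0.0 for a in true_pref}

def select(t):
    # Ask about every attribute once, then trade off mean reward vs. uncertainty.
    for a in counts:
        if counts[a] == 0:
            return a
    return max(counts,
               key=lambda a: rewards[a] / counts[a]
               + math.sqrt(2 * math.log(t) / counts[a]))

for t in range(1, 301):
    attr = select(t)
    # Simulated binary absolute feedback ("do you like <attr>?").
    feedback = 1 if random.random() < true_pref[attr] else 0
    counts[attr] += 1
    rewards[attr] += feedback

print(max(counts, key=counts.get))
```

The exploration bonus shrinks as an attribute is queried more often, so questioning concentrates on the attributes the user responds to most positively; the paper's contribution is extending this kind of update to mix absolute, relative, and neutral responses.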
Twenty-Five Years of Bayesian knowledge tracing: a systematic review
Šarić-Grgić Ines, Grubišić Ani, Gašpar Angelina
Pub Date : 2024-01-27 · DOI: 10.1007/s11257-023-09389-4
Abstract
The quality of an artificial intelligence-based tutoring system lies in its ability to observe and interpret student behaviour in order to infer the preferences and needs of an individual student. The student model enables a comprehensive representation of student knowledge and affects the quality of the other intelligent tutoring system (ITS) components. The Bayesian knowledge tracing (BKT) model is one of the first machine-learning-based student models and remains widely investigated due to its interpretability and ability to infer student knowledge. The past twenty-five years have seen increasingly rapid advances in the field, so this systematic review examines enhancements of the BKT model using the PRISMA guidelines and a unique set of criteria covering 13 aspects of enhancements and computational methods. The study also reveals two types of evaluation approaches found in the literature: predicting student answers and estimating knowledge mastery. Overall, the most frequently investigated enhancements extended the vanilla BKT model by including student characteristics and tutor interventions. Educational-context-based enhancements involving domain knowledge properties, question difficulty, and architectural prior knowledge were also frequently investigated. The expectation–maximization algorithm has practically become the standard for estimating BKT parameters. While the enhanced BKT models generally outperformed the vanilla model in predicting student answers, using measures such as RMSE (root mean square error), AUC–ROC (area under the receiver operating characteristic curve) and accuracy, only a few studies further investigated the systems’ estimates of knowledge mastery by correlating them with knowledge on post-tests. The most frequently used educational platforms included ITSs, Massive Open Online Courses (MOOCs) and simulated environments.
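The vanilla BKT model reviewed above is a two-state hidden Markov model with four parameters, and its per-answer update is standard; a minimal sketch follows. The parameter values are illustrative only, not drawn from any fitted model in the review.

```python
# Vanilla Bayesian knowledge tracing (BKT) update.
p_init = 0.2    # P(L0): prior probability the skill is already mastered
p_learn = 0.15  # P(T): probability of learning after a practice opportunity
p_slip = 0.1    # P(S): probability of a wrong answer despite mastery
p_guess = 0.25  # P(G): probability of a correct answer without mastery

def bkt_update(p_mastery, correct):
    """One BKT step: Bayesian posterior given the observed answer, then the learning transition."""
    if correct:
        num = p_mastery * (1 - p_slip)
        den = num + (1 - p_mastery) * p_guess
    else:
        num = p_mastery * p_slip
        den = num + (1 - p_mastery) * (1 - p_guess)
    posterior = num / den
    return posterior + (1 - posterior) * p_learn

p = p_init
for answer in [1, 1, 0, 1]:  # observed correctness sequence
    p = bkt_update(p, answer)
print(round(p, 3))  # → 0.819
```

The four parameters (P(L0), P(T), P(S), P(G)) are exactly what the expectation–maximization procedure mentioned above estimates from logged student answers; the enhancements surveyed in the review extend this core with extra state or per-context parameters.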
Adaptive user interfaces in systems targeting chronic disease: a systematic literature review
Wei Wang, Hourieh Khalajzadeh, John Grundy, Anuradha Madugalla, Jennifer McIntosh, Humphrey O. Obie
Pub Date : 2023-12-18 · DOI: 10.1007/s11257-023-09384-9
Abstract
eHealth technologies have increasingly been used to foster proactive self-management skills in patients with chronic diseases. However, providing each user with their desired support is challenging due to the dynamic and diverse nature of chronic diseases and their impact on users. Many such eHealth applications support aspects of “adaptive user interfaces”: interfaces that change, or can be changed, to accommodate differences in users and usage contexts. To identify the state of the art in adaptive user interfaces in the field of chronic diseases, we systematically located and analysed 48 key studies in the literature, aiming to categorise the key approaches used to date and to identify limitations, gaps, and trends in research. Our data synthesis is based on the data sources used for interface adaptation, the data collection techniques used to extract the data, the adaptive mechanisms used to process the data, and the adaptive elements generated at the interface. The findings of this review will help researchers and developers understand where adaptive user interface approaches can be applied and what must be considered when employing adaptive user interfaces in different chronic disease-related eHealth applications.
Solving the imbalanced data issue: automatic urgency detection for instructor assistance in MOOC discussion forums
Laila Alrajhi, Ahmed Alamri, Filipe Dwan Pereira, Alexandra I. Cristea, Elaine H. T. Oliveira
Pub Date : 2023-12-01 · DOI: 10.1007/s11257-023-09381-y
Abstract
In MOOCs, identifying urgent comments on discussion forums is an ongoing challenge. Urgent comments require immediate reactions from instructors, to improve interaction with their learners and potentially reduce drop-out rates, yet the task is difficult because truly urgent comments are rare. From a data analytics perspective, this represents a highly unbalanced (sparse) dataset. Here, we aim to automate the identification of urgent comments based on fine-grained learner modelling, to be used for automatic recommendations to instructors. To showcase and compare these models, we apply them to the first gold standard dataset for Urgent iNstructor InTErvention (UNITE), which we created by labelling FutureLearn MOOC data. We implement both benchmark shallow classifiers and deep learning. Importantly, we not only compare, for the first time for this unbalanced problem, several data balancing techniques, comprising text augmentation, text augmentation with undersampling, and undersampling alone, but also propose several new pipelines that combine different augmenters for text augmentation. Results show that models with undersampling can predict most urgent cases, and that 3X augmentation + undersampling usually attains the best performance. We additionally validate the best models on a generic benchmark dataset (Stanford). As a case study, we showcase how naïve Bayes with count vectors can adaptively support instructors in answering learner questions/comments, potentially saving time or increasing efficiency in supporting learners. Finally, we show that the classifier’s errors mirror the disagreements between annotators. Thus, our proposed algorithms perform at least as well as a ‘super-diligent’ human instructor (with the time to consider all comments).
Pub Date: 2023-11-21. DOI: 10.1007/s11257-023-09385-8
Matthew Haruyama, Kazuyoshi Hidaka
Although various forms of explicit feedback such as ratings and reviews are important for recommenders, they are notoriously difficult to collect. However, beyond attributing these difficulties to user effort, we know surprisingly little about user motivations. Here, we provide a behavioral account of explicit feedback’s sparsity problem by modeling a range of constructs on the rating and review intentions of US food delivery platform users, using data collected from a structured survey (n = 796). Our model, combining the Technology Acceptance Model and the Theory of Planned Behavior, revealed that standard industry practices for feedback collection appear misaligned with key psychological influences on behavioral intentions. Most notably, rating and review intentions were most influenced by subjective norms. This means that while most systems directly request feedback in user-to-provider relationships, eliciting it through social ties that manifest in user-to-user relationships is likely more effective. Second, our hypothesized dimensions of feedback’s perceived usefulness recorded insubstantial effect sizes on feedback intentions. These findings offer clues for practitioners to improve the connection between feedback-providing behaviors and recommendation benefits through contextualized messaging. In addition, perceived pressure and users’ high stated ability to provide feedback recorded insignificant effects, suggesting that frequent feedback requests may be ineffective. Lastly, privacy concerns recorded insignificant effects, hinting that the personalization–privacy paradox might not apply to preference information such as ratings and reviews. Our results provide a novel understanding of explicit feedback intentions to improve feedback collection in food delivery and beyond.
{"title":"What influences users to provide explicit feedback? A case of food delivery recommenders","authors":"Matthew Haruyama, Kazuyoshi Hidaka","doi":"10.1007/s11257-023-09385-8","DOIUrl":"https://doi.org/10.1007/s11257-023-09385-8","url":null,"abstract":"<p>Although various forms of explicit feedback such as ratings and reviews are important for recommenders, they are notoriously difficult to collect. However, beyond attributing these difficulties to user effort, we know surprisingly little about user motivations. Here, we provide a behavioral account of explicit feedback’s sparsity problem by modeling a range of constructs on the rating and review intentions of US food delivery platform users, using data collected from a structured survey (<i>n</i> = 796). Our model, combining the Technology Acceptance Model and Theory of Planned Behavior, revealed that standard industry practices for feedback collection appear misaligned with key psychological influences of behavioral intentions. Most notably, rating and review intentions were most influenced by subjective norms. This means that while most systems directly request feedback in <i>user-to-provider relationships</i>, eliciting them through social ties that manifest in <i>user-to-user relationships</i> is likely more effective. Secondly, our hypothesized dimensions of feedback’s perceived usefulness recorded insubstantial effect sizes on feedback intentions. These findings offered clues for practitioners to improve the connection between providing behaviors and recommendation benefits through contextualized messaging. In addition, perceived pressure and users’ high stated ability to provide feedback recorded insignificant effects, suggesting that frequent feedback requests may be ineffective. Lastly, privacy concerns recorded insignificant effects, hinting that the personalization-privacy paradox might not apply to preference information such as ratings and reviews. 
Our results provide a novel understanding of explicit feedback intentions to improve feedback collection in food delivery and beyond.</p>","PeriodicalId":49388,"journal":{"name":"User Modeling and User-Adapted Interaction","volume":null,"pages":null},"PeriodicalIF":3.6,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138496555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
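The study above models intentions on psychological constructs estimated from survey responses. As a loose, hypothetical illustration of the underlying idea (regressing intention on a single construct such as subjective norms), a closed-form simple linear regression on invented Likert-style data looks like this; it is a toy stand-in, not the authors' estimation method:

```python
# Hedged sketch: least-squares regression of behavioral intention on
# subjective norms. Data are invented 5-point Likert responses, not the
# paper's survey (n = 796).
subjective_norms = [1, 2, 2, 3, 4, 4, 5, 5]
intention = [1, 1, 2, 3, 3, 4, 4, 5]

n = len(intention)
mean_x = sum(subjective_norms) / n
mean_y = sum(intention) / n
cov_xy = sum((x - mean_x) * (y - mean_y)
             for x, y in zip(subjective_norms, intention))
var_x = sum((x - mean_x) ** 2 for x in subjective_norms)

slope = cov_xy / var_x          # estimated effect of norms on intention
intercept = mean_y - slope * mean_x
print(round(slope, 3), round(intercept, 3))
```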
Pub Date: 2023-11-02. DOI: 10.1007/s11257-023-09386-7
Radek Pelánek
Abstract
Computer-based learning environments can easily collect student response times. These can be used for multiple purposes, such as modeling student knowledge and affect, domain modeling, and cheating detection. However, to fully leverage them, it is essential to understand the properties of response times and the associated caveats. In this study, we delve into the properties of response time distributions, including the influence of aberrant student behavior on response times. We then provide an overview of modeling approaches that use response times and discuss potential applications of response times for guiding the adaptive behavior of learning environments.
{"title":"Leveraging response times in learning environments: opportunities and challenges","authors":"Radek Pelánek","doi":"10.1007/s11257-023-09386-7","DOIUrl":"https://doi.org/10.1007/s11257-023-09386-7","url":null,"abstract":"Abstract Computer-based learning environments can easily collect student response times. These can be used for multiple purposes, such as modeling student knowledge and affect, domain modeling, and cheating detection. However, to fully leverage them, it is essential to understand the properties of response times and associated caveats. In this study, we delve into the properties of response time distributions, including the influence of aberrant student behavior on response times. We then provide an overview of modeling approaches that use response times and discuss potential applications of response times for guiding the adaptive behavior of learning environments.","PeriodicalId":49388,"journal":{"name":"User Modeling and User-Adapted Interaction","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135934626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}