Jie Gao, K. T. W. Choo, Junming Cao, R. Lee, Simon T. Perrault
While AI-assisted individual qualitative analysis has been substantially studied, AI-assisted collaborative qualitative analysis (CQA) – a process that involves multiple researchers working together to interpret data – remains relatively unexplored. After identifying CQA practices and design opportunities through formative interviews, we designed and implemented CoAIcoder, a tool leveraging AI to enhance human-to-human collaboration within CQA through four distinct collaboration methods. Using a between-subjects design, we evaluated CoAIcoder with 32 pairs of CQA-trained participants across common CQA phases under each collaboration method. Our findings suggest that while using a shared AI model as a mediator among coders could improve CQA efficiency and foster agreement more quickly in the early coding stage, it might affect the final code diversity. We also emphasize the need to consider the level of coder independence when using AI to assist human-to-human collaboration in various CQA scenarios. Lastly, we suggest design implications for future AI-assisted CQA systems.
"CoAIcoder: Examining the Effectiveness of AI-assisted Human-to-Human Collaboration in Qualitative Analysis." ACM Transactions on Computer-Human Interaction, 2023-04-12. DOI: 10.1145/3617362
Tianyu Ren, Dengfeng Yao, Chaoran Yang, Xinchen Kang
Chinese Sign Language (CSL) and Chinese are languages used in the Chinese mainland. As the dominant language, Chinese has great influence on all levels of CSL, even though CSL, as a visual sign language, is fundamentally different from Chinese in linguistic structure. Unlike alphabetic English, Chinese is written with pictographic characters, and these characters influence CSL in distinctive ways. This study explains in detail the influence of Chinese characters on CSL at the lexical level, covering elements borrowed from Chinese such as "仿字 fangzi" (signs imitating the form of Chinese characters), "书空 shukong" (writing characters in the air with the index finger), loan translation, fingerspelling, and mouthing patterns. This influence is not a simple borrowing of Chinese characters, but a creative imitation and adaptation that serves sign language's need to express meaning. After a long period of evolution, the characteristics of Chinese characters have been naturally integrated into CSL loanwords, drawing sign language and Chinese characters closer together. CSL borrows a large number of Chinese words, most of which are signs expressing non-core concepts. These borrowed signs are an indispensable part of the CSL lexicon: they enrich sign language vocabulary, improve the accuracy of sign language expression, and play a positive role in supporting the learning, work, and lives of deaf people.
"The Influence of Chinese Characters on Chinese Sign Language." ACM Transactions on Computer-Human Interaction, 2023-04-08. DOI: 10.1145/3591465
Philip Engelbutzeder, David Randell, M. Landwehr, Konstantin Aal, G. Stevens, V. Wulf
Food practices have become an important context for questions around sustainability. Within HCI, Sustainable HCI and Human-Food-Interaction have developed as a response. We argue, nevertheless, that food practices as a social activity remain relatively under-examined and further that sustainable food practices hinge on communal activity. We present the results of action-oriented research with a grassroots movement committed to sustainable food practices at a local, communal level, thereby demonstrating the role of ICT in making food resource sharing a viable practice. We suggest that the current focus on food sharing might usefully be supplemented by attention to food resource sharing, an approach that aligns with a paradigm shift from surplus to abundance. We argue for design that aims to encourage food resource sharing at a local level but that also has wider ramifications. These ‘glocal’ endeavors recognize the complexity of prosumption practices and foster aspirations for ‘deep change’ in food systems.
"From surplus and scarcity towards abundance: Understanding the use of ICT in food resource sharing practices." ACM Transactions on Computer-Human Interaction, 2023-04-06. DOI: 10.1145/3589957
J. Prather, B. Reeves, Paul Denny, Brett A. Becker, Juho Leinonen, Andrew Luxton-Reilly, Garrett B. Powell, James Finnie-Ansley, E. Santos
Recent developments in deep learning have resulted in code-generation models that produce source code from natural language and code-based prompts with high accuracy. This is likely to have profound effects in the classroom, where novices learning to code can now use free tools to automatically suggest solutions to programming exercises and assignments. However, little is currently known about how novices interact with these tools in practice. We present the first study that observes students at the introductory level using one such code auto-generating tool, GitHub Copilot, on a typical introductory programming (CS1) assignment. Through observations and interviews we explore student perceptions of the benefits and pitfalls of this technology for learning, present newly observed interaction patterns, and discuss cognitive and metacognitive difficulties faced by students. We consider design implications of these findings, specifically in terms of how tools like Copilot can better support and scaffold the novice programming experience.
"“It’s Weird That it Knows What I Want”: Usability and Interactions with Copilot for Novice Programmers." ACM Transactions on Computer-Human Interaction, 2023-04-05. DOI: 10.1145/3617367
Zhuying Li, Yan Wang, Josh Andrés, N. Semertzidis, S. Greuter, F. Mueller
Ingestible sensors have become smaller and more powerful and allow us to envisage new human-computer interactions and bodily play experiences inside our bodies. Users can swallow ingestible sensors, which facilitate interior body sensing functions that provide data on which play experiences can be built. We call bodily play that uses ingestible sensors as play technologies “ingestible play”, and we have adopted a research-through-design (RtD) approach to investigate three prototypes. For each prototype, we conducted a field study to understand the player experiences. Based upon these results and practical design experiences, we have developed a design framework for ingestible play. We hope this work can guide the future design of ingestible play; inspire the design of play technologies inside the human body to expand the current bodily play design space; and ultimately extend our understanding of how to design for the human body by considering the bodily experience of one’s interior body.
"A Design Framework for Ingestible Play." ACM Transactions on Computer-Human Interaction, pp. 1-39, 2023-04-01. DOI: 10.1145/3589954
Aske Mottelson, A. Muresan, K. Hornbæk, G. Makransky
Body ownership illusions (BOIs) occur when participants experience that their actual body is replaced by a body shown in virtual reality (VR). Based on a systematic review of the cumulative evidence on BOIs from 111 research papers published between 2010 and 2021, this article summarizes the findings of empirical studies of BOIs. Following the PRISMA guidelines, the review points to diverse experimental practices for inducing and measuring body ownership. The two major components of embodiment measurement, body ownership and agency, are examined. The embodiment of virtual avatars generally leads to modest body ownership and slightly higher agency. We also find that BOI research lacks statistical power and standardization across tasks, measurement instruments, and analysis approaches. Furthermore, the reviewed studies showed a lack of clarity in fundamental terminology, constructs, and theoretical underpinnings. These issues restrict scientific advances on the major components of BOIs, and together impede scientific rigor and theory-building.
"A Systematic Review and Meta-analysis of the Effectiveness of Body Ownership Illusions in Virtual Reality." ACM Transactions on Computer-Human Interaction, 2023-04-01. DOI: 10.1145/3590767
Yue You, Chun-Hua Tsai, Yao Li, Fenglong Ma, Christopher Heron, Xinning Gui
Chatbot-based symptom checker (CSC) apps have become increasingly popular in healthcare. These apps engage users in human-like conversations and offer possible medical diagnoses. The conversational design of these apps can significantly impact user perceptions and experiences, and may influence medical decisions users make and the medical care they receive. However, the effects of the conversational design of CSCs remain understudied, and there is a need to investigate and enhance users’ interactions with CSCs. In this article, we conducted a two-stage exploratory study using a human-centered design methodology. We first conducted a qualitative interview study to identify key user needs in engaging with CSCs. We then performed an experimental study to investigate potential CSC conversational design solutions based on the results from the interview study. We identified that emotional support, explanations of medical information, and efficiency were important factors for users in their interactions with CSCs. We also demonstrated that emotional support and explanations could affect user perceptions and experiences, and they are context-dependent. Based on these findings, we offer design implications for CSC conversations to improve the user experience and health-related decision-making.
"Beyond Self-diagnosis: How a Chatbot-based Symptom Checker Should Respond." ACM Transactions on Computer-Human Interaction, vol. 30, no. 1, pp. 1-44, 2023-03-31. DOI: 10.1145/3589959
Now that the protections of Roe v. Wade are no longer available throughout the United States, the free flow of personal data can be used by legal authorities as evidence of a felony. However, we know little about how impacted individuals approach their reproductive privacy in this new landscape. To address this gap, we conducted interviews with 15 individuals who were or could become pregnant. While nearly all reported deleting period tracking apps, they were not willing to go much further, even while acknowledging the risks of generating data. Quite a few envisioned a more inhospitable, Handmaid's Tale-like climate in which their medical history and movements would put them in legal peril, but felt that, by definition, this reality was insuperable, and also that they were not the target: their privileged location and stage of life, they reasoned, did not make them the focus of government or vigilante efforts. We also found that certain individuals (often younger and/or with reproductive risks) were more attuned to the need to modify their technology use, or better equipped to employ high- and low-tech strategies. Using an intersectional lens, we discuss implications for media advocacy and propose privacy intermediation as a frame for thinking about reproductive privacy.
Nora Mcdonald, Nazanin Andalibi
"“I Did Watch ‘The Handmaid's Tale’”: Threat Modeling Privacy Post-roe in the United States." ACM Transactions on Computer-Human Interaction, vol. 30, no. 1, pp. 1-34, 2023-03-31. DOI: 10.1145/3589960
Hybridity in immersive technologies has not been studied for factors that are likely to influence engagement. A noticeable factor is the spatial enclosure that defines where users meet. This involves a mutual object of interest, content that the users may generate around the object, and the proximity between users. This study examines these factors, namely how object interactivity, user-generated content (UGC), and avatar proximity influence engagement. We designed a Hybrid Virtual and Augmented Reality (HVAR) environment that supports paired users in experiencing cultural heritage in both Virtual Reality (VR) and Augmented Reality (AR). A user study was conducted with 60 participants, providing assessments of engagement and presence via questionnaires, together with mobile electroencephalogram (mEEG) and user activity data that measure VR user engagement in real time. Our findings provide insights into how engagement between users can occur in HVAR environments for a future hybrid reality with multi-device connectivity.
Yue Li, Eugene Ch’ng, S. Cobb
"Factors Influencing Engagement in Hybrid Virtual and Augmented Reality." ACM Transactions on Computer-Human Interaction, pp. 1-27, 2023-03-30. DOI: 10.1145/3589952
Roberto Martínez Maldonado, Vanessa Echeverría, Gloria Fernández-Nieto, Lixiang Yan, Linxuan Zhao, Riordan Alfredo, Xinyu Li, S. Dix, Hollie Jaggard, Rosie Wotherspoon, Abra Osborne, D. Gašević, S. B. Shum
Multimodal Learning Analytics (MMLA) innovations make use of rapidly evolving sensing and artificial intelligence algorithms to collect rich data about learning activities that unfold in physical spaces. The analysis of these data is opening exciting new avenues for both studying and supporting learning. Yet, practical and logistical challenges commonly appear while deploying MMLA innovations “in-the-wild”. These can span from technical issues related to enhancing the learning space with sensing capabilities, to the increased complexity of teachers’ tasks. These practicalities have rarely been investigated. This paper addresses this gap by presenting a set of lessons learnt from a 2-year human-centred MMLA in-the-wild study conducted with 399 students and 17 educators in the context of nursing education. The lessons learnt were synthesised into topics related to i) technological/physical aspects of the deployment; ii) multimodal data and interfaces; iii) the design process; iv) participation, ethics and privacy; and v) sustainability of the deployment.
"Lessons Learnt from a Multimodal Learning Analytics Deployment In-the-wild." ACM Transactions on Computer-Human Interaction, 2023-03-16. DOI: 10.1145/3622784