
Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems: Latest Publications

Learning and Practicing Logic Circuits: Development of a Mobile-based Learning Prototype
M. Seraj
Nowadays, with the advent of electronic devices in everyday life, mobile devices can be utilized for learning purposes. When designing a mobile-based learning application, a large number of aspects should be taken into account. For the present paper, the following aspects are of special importance: first, it should be considered how to represent information; second, possible interactions between learner and system should be defined; third – and depending on the second aspect – it should be considered how real-time responses can be provided by the system. Moreover, psychological theories such as the 4C/ID model, as well as findings with respect to blended learning environments, should be taken into account. In this paper, a mobile-based learning prototype concerning the learning topic “logic circuit design” is presented which considers the mentioned aspects to support independent practice. The prototype includes four different representations: (i) code-based (Verilog hardware description language), (ii) graphical-based (gate-level view), (iii) Boolean function, and (iv) truth table for each gate. The proposed learning system divides the learning content into different sections to support independent practice in meaningful steps. Multiple representations are included in order to foster understanding and transfer. The resulting implications for future work are discussed.
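Representations (iii) and (iv) from the abstract, a gate's Boolean function and its truth table, are mechanically related: the table is simply the function evaluated over every input combination. A minimal illustrative sketch in Python (the prototype itself works with Verilog; nothing here is taken from the paper's implementation):

```python
from itertools import product

def truth_table(fn, n_inputs):
    """Enumerate all input combinations of an n-input gate and
    return rows of (inputs..., output)."""
    return [(*bits, fn(*bits)) for bits in product((0, 1), repeat=n_inputs)]

# Boolean-function view of a 2-input NAND gate: out = NOT (a AND b)
nand = lambda a, b: int(not (a and b))

# Truth-table view of the same gate
for row in truth_table(nand, 2):
    print(row)  # (a, b, out)
```

The same pattern extends to any gate: swap in a different lambda (AND, XOR, a three-input majority gate) and the table view follows automatically from the function view.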
DOI: 10.1145/3411763.3451720
Citations: 0
The Future of Human-Food Interaction
Jialin Deng, Yan Wang, Carlos Velasco, Ferran Altarriba Bertran, R. Comber, Marianna Obrist, K. Isbister, C. Spence, F. Mueller
There is an increasing interest in food within the HCI discipline, with many interactive prototypes emerging that augment, extend and challenge the various ways people engage with food, ranging from growing plants and cooking ingredients to serving dishes and eating together. Grounding theory is also emerging that draws in particular from embodied interaction, highlighting the need to consider not only instrumental, but also experiential factors specific to human-food interactions. Considering this, we are provided with an opportunity to extend human-food interactions through knowledge gained from designing novel systems emerging through technical advances. This workshop aims to explore the possibility of bringing practitioners, researchers and theorists together to discuss the future of human-food interaction, with a particular highlight on the design of experiential aspects of human-food interactions beyond the instrumental. This workshop extends prior community building efforts in this area and hence explicitly invites submissions concerning empirically-informed knowledge of how technologies can enrich eating experiences. In doing so, people will benefit not only from new technologies around food, but also from the many rich benefits associated with eating, especially when eating with others.
DOI: 10.1145/3411763.3441312
Citations: 19
Instrumeteor: Authoring tool for Guitar Performance Video
Yuichi Atarashi
To show off their playing, musicians publish musical performance videos on streaming services. In order to find out typical characteristics of guitar performance videos, we carried out a quantitative survey of guitar performance videos. Then, we discuss key problems of creating effects informed by the survey. According to the discussion, authoring videos with typical effects takes a long time even for experienced users because they typically need to combine multiple video tracks (e.g., lyrics and videos shot from multiple angles) into a single track. They need to synchronize all tracks with the musical piece and set transitions between them at the right timing, aware of the musical structure. This paper presents Instrumeteor, an authoring tool for musical performance videos. First, it automatically analyzes the musical structure in the tracks to align them on a single timeline. Second, it implements typical video effects informed by the survey. In this way, our tool reduces manual work and unleashes the musicians’ creativity.
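The abstract does not detail how Instrumeteor aligns the tracks; a common approach to synchronizing two recordings of the same piece is to slide one track's energy (onset) envelope against the other's and keep the delay with the strongest overlap. A toy sketch of that idea, with made-up envelopes (this is an assumption about the technique, not the paper's actual algorithm):

```python
def best_delay(ref, delayed, max_delay):
    """Estimate by how many frames `delayed` lags behind `ref`:
    try every candidate delay and keep the one whose overlap
    (dot product of the two energy envelopes) is largest."""
    def score(d):
        return sum(ref[i] * delayed[i + d]
                   for i in range(len(ref)) if i + d < len(delayed))
    return max(range(max_delay + 1), key=score)

# toy energy envelopes: `delayed` is `ref` shifted 3 frames later
ref     = [0, 1, 0, 0, 2, 0, 1, 0, 0, 0]
delayed = [0, 0, 0, 0, 1, 0, 0, 2, 0, 1]
print(best_delay(ref, delayed, 5))  # → 3
```

Once every track's delay relative to a reference track is known, all tracks can be placed on a single timeline, which is the alignment step the abstract describes.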
DOI: 10.1145/3411763.3451521
Citations: 1
TactiHelm: Tactile Feedback in a Cycling Helmet for Collision Avoidance
Dong-Bach Vo, Julia Saari, S. Brewster
This paper introduces TactiHelm, a helmet that can inform cyclists about potential collisions. To inform the design of TactiHelm, a survey on cycling safety was conducted. The results highlighted the need for a support system to inform on location and proximity of surrounding vehicles. A set of tactile cues for TactiHelm conveying proximity and directions of the collisions were designed and evaluated. The results show that participants could correctly identify proximity up to 91% and directions up to 85% when tactile cues were delivered on the head, making TactiHelm a suitable device for notifications when cycling.
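The abstract says the cues convey proximity and direction but not how they are encoded. Purely as a hypothetical sketch, a helmet with a small ring of actuators could map a vehicle's relative bearing to an actuator and its distance to an intensity level (the four-actuator layout and the distance thresholds below are assumptions, not taken from the paper):

```python
# Assumed layout: four actuators around the helmet, 90 degrees apart.
ACTUATORS = ["front", "right", "rear", "left"]

def tactile_cue(bearing_deg, distance_m):
    """Map a vehicle's relative bearing (0 = straight ahead, clockwise)
    and distance in metres to an (actuator, intensity) cue."""
    # Each actuator covers a 90-degree sector centred on its direction.
    actuator = ACTUATORS[int(((bearing_deg % 360) + 45) // 90) % 4]
    if distance_m < 2:
        level = "strong"
    elif distance_m < 5:
        level = "medium"
    else:
        level = "weak"
    return actuator, level

print(tactile_cue(170, 1.5))  # → ('rear', 'strong'): car close behind
```

A design like this keeps the vocabulary small (4 directions x 3 intensities), which is consistent with the high recognition rates the study reports.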
DOI: 10.1145/3411763.3451580
Citations: 6
LightPaintAR: Assist Light Painting Photography with Augmented Reality
Tianyi Wang, Xun Qian, F. He, K. Ramani
Light painting photos are created by moving light sources in mid-air while taking a long exposure photo. However, it is challenging for novice users to leave accurate light traces without any spatial guidance. Therefore, we present LightPaintAR, a novel interface that leverages augmented reality (AR) traces as a spatial reference to enable precise movement of the light sources. LightPaintAR allows users to draft, edit, and adjust virtual light traces in AR, and move light sources along the AR traces to generate accurate light traces on photos. With LightPaintAR, users can light paint complex patterns with multiple strokes and colors. We evaluate the effectiveness and the usability of our system with a user study and showcase multiple light paintings created by the users. Further, we discuss future improvements of LightPaintAR.
DOI: 10.1145/3411763.3451672
Citations: 4
Virtual Global Landmark: An Augmented Reality Technique to Improve Spatial Navigation Learning
Avinash Kumar Singh, Jia Liu, C. A. T. Cortes, Chin-Teng Lin
Navigation is a multifaceted human ability involving complex cognitive functions. It allows the active exploration of unknown environments without becoming lost while enabling us to move efficiently across well-known spaces. However, the increasing reliance on navigation assistance systems reduces processing of the surrounding environment and decreases spatial knowledge acquisition, and thus orienting ability. To prevent such a skill loss induced by current navigation support systems like Google Maps, we propose a novel landmark technique in augmented reality (AR): the virtual global landmark (VGL). This technique seeks to aid navigation and promote spatial learning. We conducted a pilot study with five participants to compare directional arrows with VGL. Our results suggest that participants learned more about the environment while navigating with VGL than with directional arrows, without any significant increase in mental workload. These results have substantial implications for the future of navigation systems.
DOI: 10.1145/3411763.3451634
Citations: 13
JINSense: Repurposing Electrooculography Sensors on Smart Glass for Midair Gesture and Context Sensing
H. Yeo, Juyoung Lee, Woontack Woo, H. Koike, A. Quigley, K. Kunze
In this work, we explore a new sensing technique for smart eyewear equipped with Electrooculography (EOG) sensors. We repurpose the EOG sensors embedded in JINS MEME smart eyewear, originally designed to detect eye movement, to detect midair hand gestures. We also explore the potential of sensing human proximity and rubbing actions, and of differentiating materials and objects using this sensor. These newfound sensing capabilities enable various types of novel input and interaction scenarios for such wearable eyewear devices, whether worn on the body or resting on a desk.
DOI: 10.1145/3411763.3451741
Citations: 2
BlahBlahBot: Facilitating Conversation between Strangers using a Chatbot with ML-infused Personalized Topic Suggestion
Donghoon Shin, Sang-Taek Yoon, Soomin Kim, Joonhwan Lee
Chatting with strangers is a prevalent behavior in online settings where people can easily gather. Yet, people often find it difficult to initiate and maintain conversation due to the lack of information about strangers. Hence, we aimed to facilitate conversation between strangers with the use of machine learning (ML) algorithms, and we present BlahBlahBot, an ML-infused chatbot that moderates conversation between strangers with personalized topics. Based on social media posts, BlahBlahBot supports the conversation by suggesting topics that are likely to be of mutual interest between users. A user study with three groups (control, random topic chatbot, and BlahBlahBot; N=18) found BlahBlahBot feasible for increasing both conversation quality and closeness to the partner, and the user interviews identified the factors that led to these increases. Overall, our preliminary results imply that an ML-infused conversational agent can be effective for augmenting a dyadic conversation.
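The abstract does not describe the model behind the topic suggestion. As a purely illustrative stand-in for the ML component, mutual interest between two users can be approximated by keyword overlap across their posts; every name and data item below is hypothetical:

```python
from collections import Counter

def suggest_topics(posts_a, posts_b, topics, k=2):
    """Rank candidate topics by how strongly their keywords appear
    in BOTH users' posts (min of the two counts), return top k."""
    words_a = Counter(w for p in posts_a for w in p.lower().split())
    words_b = Counter(w for p in posts_b for w in p.lower().split())

    def mutual(topic):
        kws = topic.lower().split()
        return min(sum(words_a[w] for w in kws),
                   sum(words_b[w] for w in kws))

    return sorted(topics, key=mutual, reverse=True)[:k]

posts_a = ["went hiking last weekend", "new camera for hiking trips"]
posts_b = ["best hiking trails nearby", "trying a new pasta recipe"]
print(suggest_topics(posts_a, posts_b, ["hiking", "cooking", "movies"], k=1))
# → ['hiking']
```

Taking the minimum of the two users' counts ensures a topic only ranks highly when both users mention it, which mirrors the "mutual interest" criterion in the abstract; a real system would use richer features than raw word counts.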
DOI: 10.1145/3411763.3451771
Citations: 12
Fake Moods: Can Users Trick an Emotion-Aware VoiceBot?
Yong Ma, Heiko Drewes, A. Butz
The ability to deal properly with emotion could be a critical feature of future VoiceBots. Humans might even choose to use fake emotions, e.g., sound angry to emphasize what they are saying or sound nice to get what they want. However, it is unclear whether current emotion detection methods detect such acted emotions properly, or rather the true emotion of the speaker. We asked a small number of participants (26) to mimic five basic emotions and used an open source emotion-in-voice detector to provide feedback on whether their acted emotion was recognized as intended. We found that it was difficult for participants to mimic all five emotions and that certain emotions were easier to mimic than others. However, it remains unclear whether this is due to the fact that emotion was only acted or due to the insufficiency of the detection software. As an intended side effect, we collected a small corpus of labeled data for acted emotion in speech, which we plan to extend and eventually use as training data for our own emotion detection. We present the study setup and discuss some insights on our results.
DOI: 10.1145/3411763.3451744
Citations: 2
The Peril and Potential of XR-based Interactions with Wildlife
Daniel Pimentel
In “Being a Beast”, Charles Foster recounts living with, and as, wildlife (e.g., otters, foxes). These encounters, he contends, forge human-nature connections which have waned, negatively impacting biodiversity conservation. Yet, we need not live amidst beasts to bridge the human-nature gap. Cross-reality (XR) platforms (i.e., virtual and augmented reality) have the unique capacity to facilitate pseudo interactions with, and as, wildlife, connecting audiences to the plight of endangered species. However, XR-based wildlife interaction, I argue, is a double-edged sword whose implementation warrants as much attention in HCI as in environmental science. In this paper I highlight the promise of XR-based wildlife encounters, and discuss dilemmas facing developers tasked with fabricating mediated interactions with wildlife. I critique this approach by outlining how such experiences may negatively affect humans and the survivability of the very species seeking to benefit from them.
DOI: 10.1145/3411763.3450378
Citations: 8