
Latest Publications: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems

The Disagreement Deconvolution: Bringing Machine Learning Performance Metrics In Line With Reality
Pub Date: 2021-05-06 DOI: 10.1145/3411764.3445423
Mitchell L. Gordon, Kaitlyn Zhou, Kayur Patel, Tatsunori B. Hashimoto, Michael S. Bernstein
Machine learning classifiers for human-facing tasks such as comment toxicity and misinformation often score highly on metrics such as ROC AUC but are received poorly in practice. Why this gap? Today, metrics such as ROC AUC, precision, and recall are used to measure technical performance; however, human-computer interaction observes that evaluation of human-facing systems should account for people’s reactions to the system. In this paper, we introduce a transformation that more closely aligns machine learning classification metrics with the values and methods of user-facing performance measures. The disagreement deconvolution takes in any multi-annotator (e.g., crowdsourced) dataset, disentangles stable opinions from noise by estimating intra-annotator consistency, and compares each test set prediction to the individual stable opinions from each annotator. Applying the disagreement deconvolution to existing social computing datasets, we find that current metrics dramatically overstate the performance of many human-facing machine learning tasks: for example, performance on a comment toxicity task is corrected from .95 to .73 ROC AUC.
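The core move can be illustrated with a toy evaluation: score each test prediction against every annotator's label individually rather than against a single aggregated (majority-vote) label. This minimal sketch omits the paper's actual deconvolution step (estimating intra-annotator consistency to separate stable opinions from noise); the data and function names are illustrative, not from the paper.

```python
from statistics import mean

def aggregate_accuracy(preds, annotations):
    """Standard evaluation: compare each prediction to the majority label."""
    correct = []
    for pred, labels in zip(preds, annotations):
        majority = max(set(labels), key=labels.count)
        correct.append(pred == majority)
    return mean(correct)

def per_annotator_accuracy(preds, annotations):
    """Disaggregated evaluation: score each prediction against every
    annotator's label individually, then average over all comparisons."""
    correct = []
    for pred, labels in zip(preds, annotations):
        correct.extend(pred == label for label in labels)
    return mean(correct)

# Toy data: 1 = toxic, 0 = not toxic; three annotators per item.
annotations = [[1, 1, 0], [0, 0, 0], [1, 0, 0], [1, 1, 1]]
preds = [1, 0, 0, 1]

print(aggregate_accuracy(preds, annotations))      # perfect under majority vote
print(per_annotator_accuracy(preds, annotations))  # lower once disagreement counts
```

The classifier looks perfect against majority labels, yet a sixth of the individual annotator judgments disagree with it, which is exactly the gap the deconvolution surfaces.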
Citations: 89
LightTouch Gadgets: Extending Interactions on Capacitive Touchscreens by Converting Light Emission to Touch Inputs
Pub Date: 2021-05-06 DOI: 10.1145/3411764.3445581
Kaori Ikematsu, Kunihiro Kato, Y. Kawahara
We present LightTouch, a 3D-printed passive gadget to enhance touch interactions on unmodified capacitive touchscreens. The LightTouch gadgets simulate finger operations such as tapping, swiping, and multi-touch gestures by means of conductive materials and light-dependent resistors (LDR) embedded in the object. The touchscreen emits visible light and the LDR senses the level of this light, which changes its resistance value. By controlling the screen brightness, it intentionally connects or disconnects the path between the GND and the touchscreen, thus allowing the touch inputs to be controlled. In contrast to conventional physical extensions for touchscreens, our technique requires neither continuous finger contact on the conductive part nor the use of batteries. As such, it opens up new possibilities for touchscreen interactions beyond the simple automation of touch inputs, such as establishing a communication channel between devices, enhancing the trackability of tangibles, and inter-application operations.
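The sensing principle can be sketched as a threshold on LDR resistance: a bright screen lowers the resistance enough to close the conductive path to ground, which the touchscreen reads as a touch. All component values below are illustrative assumptions, not measurements from the paper's hardware.

```python
def ldr_resistance(brightness, r_dark=1_000_000, r_bright=5_000):
    """Toy LDR model: resistance (ohms) falls linearly as screen
    brightness (0..1) rises. Values are illustrative only."""
    return r_dark - (r_dark - r_bright) * brightness

def touch_registered(brightness, threshold=100_000):
    """The pad conducts enough to read as a touch only when the LDR
    resistance drops below an assumed sensing threshold."""
    return ldr_resistance(brightness) < threshold

print(touch_registered(0.1))   # dim screen: path open, no touch
print(touch_registered(0.95))  # bright screen: path closed, touch fires
```

Modulating brightness over time would then let the screen itself toggle touch events on and off, which is what enables the battery-free control described in the abstract.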
Citations: 7
Effects of Communication Directionality and AI Agent Differences in Human-AI Interaction
Pub Date: 2021-05-06 DOI: 10.1145/3411764.3445256
Zahra Ashktorab, Casey Dugan, James M. Johnson, Qian Pan, Wei Zhang, Sadhana Kumaravel, Murray Campbell
In Human-AI collaborative settings that are inherently interactive, direction of communication plays a role in how users perceive their AI partners. In an AI-driven cooperative game with partially observable information, players (be it the AI or the human player) require their actions to be interpreted accurately by the other player to yield a successful outcome. In this paper, we investigate social perceptions of AI agents with various directions of communication in a cooperative game setting. We measure subjective social perceptions (rapport, intelligence, and likeability) of participants towards their partners when participants believe they are playing with an AI or with a human and the nature of the communication (responsiveness and leading roles). We ran a large scale study on Mechanical Turk (n=199) of this collaborative game and find significant differences in gameplay outcome and social perception across different AI agents, different directions of communication and when the agent is perceived to be an AI/Human. We find that the bias against the AI that has been demonstrated in prior studies varies with the direction of the communication and with the AI agent.
Citations: 14
Seeing Beyond Expert Blind Spots: Online Learning Design for Scale and Quality
Pub Date: 2021-05-06 DOI: 10.1145/3411764.3445045
Xu Wang, C. Rosé, K. Koedinger
Maximizing system scalability and quality are sometimes at odds. This work provides an example showing scalability and quality can be achieved at the same time in instructional design, contrary to what instructors may believe or expect. We situate our study in the education of HCI methods, and provide suggestions to improve active learning within the HCI education community. While designing learning and assessment activities, many instructors face the choice of using open-ended or close-ended activities. Close-ended activities such as multiple-choice questions (MCQs) enable automated feedback to students. However, a survey with 22 HCI professors revealed a belief that MCQs are less valuable than open-ended questions, and thus, using them entails making a quality sacrifice in order to achieve scalability. A study with 178 students produced no evidence to support the teacher belief. This paper indicates more promise than concern in using MCQs for scalable instruction and assessment in at least some HCI domains.
Citations: 11
Styling Words: A Simple and Natural Way to Increase Variability in Training Data Collection for Gesture Recognition
Pub Date: 2021-05-06 DOI: 10.1145/3411764.3445457
Woojin Kang, Intaek Jung, Daeho Lee, Jin-Hyuk Hong
Due to advances in deep learning, gestures have become a more common tool for human-computer interaction. When implementing a large amount of training data, deep learning models show remarkable performance in gesture recognition. Since it is expensive and time consuming to collect gesture data from people, we are often confronted with a practicality issue when managing the quantity and quality of training data. It is a well-known fact that increasing training data variability can help to improve the generalization performance of machine learning models. Thus, we directly intervene in the collection of gesture data to increase human gesture variability by adding some words (called styling words) into the data collection instructions, e.g., giving the instruction "perform gesture #1 faster" as opposed to "perform gesture #1." Through an in-depth analysis of gesture features and video-based gesture recognition, we have confirmed the advantageous use of styling words in gesture training data collection.
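The intervention amounts to crossing each gesture prompt with a small vocabulary of modifiers before showing instructions to participants. The styling words below are hypothetical examples; the paper's exact vocabulary may differ.

```python
import itertools

# Hypothetical styling words; "" keeps the unmodified baseline prompt.
styling_words = ["", "faster", "slower", "bigger", "smaller"]
gestures = ["gesture #1", "gesture #2"]

def instructions(gestures, styling_words):
    """Cross every gesture with every styling word to diversify
    the prompts shown to data-collection participants."""
    return [f"perform {g} {s}".strip()
            for g, s in itertools.product(gestures, styling_words)]

prompts = instructions(gestures, styling_words)
print(len(prompts))  # 10 distinct prompts from 2 gestures
```

Each participant can then be assigned prompts sampled from this expanded set, so the collected executions vary in speed and amplitude without any change to the recording setup.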
Citations: 2
CoNotate: Suggesting Queries Based on Notes Promotes Knowledge Discovery
Pub Date: 2021-05-06 DOI: 10.1145/3411764.3445618
Srishti Palani, Zijian Ding, Austin Nguyen, Andrew Chuang, S. Macneil, Steven W. Dow
When exploring a new domain through web search, people often struggle to articulate queries because they lack domain-specific language and well-defined informational goals. Perhaps search tools rely too much on the query to understand what a searcher wants. Towards expanding this contextual understanding of a user during exploratory search, we introduce a novel system, CoNotate, which offers query suggestions based on analyzing the searcher’s notes and previous searches for patterns and gaps in information. To evaluate this approach, we conducted a within-subjects study where participants (n=38) conducted exploratory searches using a baseline system (standard web search) and the CoNotate system. The CoNotate approach helped searchers issue significantly more queries, and discover more terminology than standard web search. This work demonstrates how search can leverage user-generated content to help people get started when exploring complex, multi-faceted information spaces.
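A toy stand-in for the gap analysis is to rank terms that appear in the searcher's notes but were never issued as a query. CoNotate's actual pipeline is more sophisticated; the function name, tokenization, and ranking here are simplifying assumptions.

```python
import re
from collections import Counter

def suggest_queries(notes, past_queries, k=3):
    """Suggest terms that appear in the searcher's notes but were
    never part of a past query -- a toy gap analysis."""
    def tokenize(text):
        return re.findall(r"[a-z]+", text.lower())
    seen = {t for q in past_queries for t in tokenize(q)}
    counts = Counter(t for t in tokenize(notes)
                     if t not in seen and len(t) > 3)
    return [term for term, _ in counts.most_common(k)]

notes = "mirrorless cameras use phase detection autofocus; autofocus speed varies"
past = ["best mirrorless cameras"]
print(suggest_queries(notes, past))  # 'autofocus' ranks first
```

Terms the user has noted repeatedly but never searched for are exactly the domain vocabulary a novice struggles to articulate, which is why surfacing them as suggestions helps.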
Citations: 14
Optimization-based User Support for Cinematographic Quadrotor Camera Target Framing
Pub Date: 2021-05-06 DOI: 10.1145/3411764.3445568
Christoph Gebhardt, Otmar Hilliges
To create aesthetically pleasing aerial footage, the correct framing of camera targets is crucial. However, current quadrotor camera tools do not consider the 3D extent of actual camera targets in their optimization schemes and simply interpolate between keyframes when generating a trajectory. This can yield videos with aesthetically unpleasing target framing. In this paper, we propose a target framing algorithm that optimizes the quadrotor camera pose such that targets are positioned at desirable screen locations according to videographic compositional rules and entirely visible throughout a shot. Camera targets are identified using a semi-automatic pipeline which leverages a deep-learning-based visual saliency model. A large-scale perceptual study (N ≈ 500) shows that our method enables users to produce shots with a target framing that is closer to what they intended to create and as or more aesthetically pleasing than with the previous state of the art.
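One compositional rule such an objective can encode is the rule of thirds: penalize the squared screen-space distance of the projected target from the nearest thirds intersection, with an infinite penalty when the target leaves the frame. The paper optimizes full camera poses against terms like these; this sketch only scores a given 2D projection, and the screen size and weighting are assumptions.

```python
def framing_cost(target_px, screen_w=1920, screen_h=1080):
    """Toy framing objective: squared pixel distance of the projected
    target from the nearest rule-of-thirds intersection, plus an
    infinite penalty if the target is outside the frame."""
    thirds = [(screen_w * i / 3, screen_h * j / 3)
              for i in (1, 2) for j in (1, 2)]
    x, y = target_px
    if not (0 <= x <= screen_w and 0 <= y <= screen_h):
        return float("inf")  # target not visible: hard constraint violated
    return min((x - tx) ** 2 + (y - ty) ** 2 for tx, ty in thirds)

print(framing_cost((640, 360)))   # exactly on a thirds point: cost 0
print(framing_cost((2000, 360)))  # off-screen: infinite cost
```

A trajectory optimizer would then minimize the sum of such costs over all frames of the shot, trading framing quality against the quadrotor's dynamic constraints.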
Citations: 4
Generating the Presence of Remote Mourners: a Case Study of Funeral Webcasting in Japan
Pub Date: 2021-05-06 DOI: 10.1145/3411764.3445617
Daisuke Uriu, Kenta Toshima, Minori Manabe, Takeru Yazaki, Takeshi Funatsu, Atsushi Izumihara, Zendai Kashino, Atsushi Hiyama, M. Inami
Funerals are irreplaceable events, especially for bereaved family members and relatives. However, the COVID-19 pandemic has prevented many people worldwide from attending their loved ones’ funerals. The authors had the opportunity to assist one family faced with this predicament by webcasting and recording funeral rites held near Tokyo in June, 2020. Using our original 360-degree Telepresence system and smartphones running Zoom, we enabled the deceased’s elder siblings to remotely attend the funeral and did our utmost to make them feel present in the funeral hall. Despite the webcasting via Zoom contributing more to their remote attendances than our system, we discovered thoughtful findings which could be useful for designing remote funeral attendances. From the findings, we also discuss how HCI designers can contribute to this highly sensitive issue, weaving together knowledge from various domains including techno-spiritual practices, thanato-sensitive designs; and other religious and cultural aspects related to death rituals.
Citations: 15
XRgonomics: Facilitating the Creation of Ergonomic 3D Interfaces
Pub Date: 2021-05-06 DOI: 10.1145/3411764.3445349
João Marcelo Evangelista Belo, A. Feit, Tiare M. Feuchtner, Kaj Grønbæk
Arm discomfort is a common issue in Cross Reality applications involving prolonged mid-air interaction. Solving this problem is difficult because of the lack of tools and guidelines for 3D user interface design. Therefore, we propose a method to make existing ergonomic metrics available to creators during design by estimating the interaction cost at each reachable position in the user’s environment. We present XRgonomics, a toolkit to visualize the interaction cost and make it available at runtime, allowing creators to identify UI positions that optimize users’ comfort. Two scenarios show how the toolkit can support 3D UI design and dynamic adaptation of UIs based on spatial constraints. We present results from a walkthrough demonstration, which highlight the potential of XRgonomics to make ergonomics metrics accessible during the design and development of 3D UIs. Finally, we discuss how the toolkit may address design goals beyond ergonomics.
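The idea of precomputing an interaction cost for each reachable position can be sketched as a function over candidate UI placements: unreachable points cost infinity, and reachable ones are penalized for arm extension and for holding the hand above shoulder height. The cost terms, shoulder position, and arm length below are illustrative assumptions, not the ergonomic metrics (e.g. RULA, consumed endurance) the toolkit actually exposes.

```python
import math

def interaction_cost(point, shoulder=(0.0, 1.4, 0.0), arm_length=0.7):
    """Toy ergonomic cost for placing a UI element at `point` (metres):
    infinity if out of reach, otherwise a penalty that grows with arm
    extension and with raising the hand above the shoulder."""
    dx, dy, dz = (p - s for p, s in zip(point, shoulder))
    reach = math.sqrt(dx * dx + dy * dy + dz * dz)
    if reach > arm_length:
        return float("inf")  # position is unreachable
    extension = reach / arm_length
    raised = max(0.0, dy)  # height of the hand above the shoulder
    return extension + 2.0 * raised

candidates = [(0.0, 1.0, 0.4), (0.0, 1.8, 0.3), (0.0, 1.3, 0.9)]
best = min(candidates, key=interaction_cost)
print(best)  # the low, close position wins
```

An adaptive UI could evaluate such a cost over a voxel grid of the user's surroundings at runtime and snap interface elements to the cheapest positions that also satisfy spatial constraints.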
Citations: 32
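The cost-estimation idea in the XRgonomics abstract above can be sketched in a few lines: assign every reachable 3D position an ergonomic cost, then pick the cheapest position as a UI anchor. The cost function below (weighted arm extension plus elevation) and all names are illustrative assumptions standing in for the paper's actual ergonomic metrics and API.

```python
import math

def interaction_cost(point, shoulder=(0.0, 1.4, 0.0), arm_length=0.6):
    """Toy ergonomic cost for a reachable 3D point (meters).

    Assumption: cost grows with arm extension and with elevation above
    shoulder height -- a stand-in for the consumed-endurance / RULA-style
    metrics the toolkit aggregates, not the actual XRgonomics API.
    """
    dx, dy, dz = (p - s for p, s in zip(point, shoulder))
    reach = math.sqrt(dx * dx + dy * dy + dz * dz)
    if reach > arm_length:
        return float("inf")                 # outside the reachable envelope
    extension = reach / arm_length          # 0 = at shoulder, 1 = fully extended
    elevation = max(dy, 0.0) / arm_length   # raising the arm is tiring
    return 0.6 * extension + 0.4 * elevation

def best_ui_position(candidates):
    """Pick the candidate UI anchor with the lowest estimated cost."""
    return min(candidates, key=interaction_cost)

# Voxelize a small volume in front of the user and pick the cheapest spot.
grid = [(x / 10, y / 10, z / 10)
        for x in range(-4, 5) for y in range(8, 17) for z in range(1, 7)]
best = best_ui_position([p for p in grid if interaction_cost(p) < float("inf")])
# The lowest-cost spot sits close in, just in front of the shoulder.
```

Precomputing this cost over a voxel grid is what lets a runtime adapt UI placement cheaply: the expensive ergonomic evaluation happens once, and placement becomes a lookup-and-minimize step.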
Gesture Knitter: A Hand Gesture Design Tool for Head-Mounted Mixed Reality Applications
Pub Date : 2021-05-06 DOI: 10.1145/3411764.3445766
George B. Mo, John J. Dudley, P. Kristensson
Hand gestures are a natural and expressive input method enabled by modern mixed reality headsets. However, it remains challenging for developers to create custom gestures for their applications. Conventional strategies to bespoke gesture recognition involve either hand-crafting or data-intensive deep-learning. Neither approach is well suited for rapid prototyping of new interactions. This paper introduces a flexible and efficient alternative approach for constructing hand gestures. We present Gesture Knitter: a design tool for creating custom gesture recognizers with minimal training data. Gesture Knitter allows the specification of gesture primitives that can then be combined to create more complex gestures using a visual declarative script. Designers can build custom recognizers by declaring them from scratch or by providing a demonstration that is automatically decoded into its primitive components. Our developer study shows that Gesture Knitter achieves high recognition accuracy despite minimal training data and delivers an expressive and creative design experience.
{"title":"Gesture Knitter: A Hand Gesture Design Tool for Head-Mounted Mixed Reality Applications","authors":"George B. Mo, John J. Dudley, P. Kristensson","doi":"10.1145/3411764.3445766","DOIUrl":"https://doi.org/10.1145/3411764.3445766","url":null,"abstract":"Hand gestures are a natural and expressive input method enabled by modern mixed reality headsets. However, it remains challenging for developers to create custom gestures for their applications. Conventional strategies to bespoke gesture recognition involve either hand-crafting or data-intensive deep-learning. Neither approach is well suited for rapid prototyping of new interactions. This paper introduces a flexible and efficient alternative approach for constructing hand gestures. We present Gesture Knitter: a design tool for creating custom gesture recognizers with minimal training data. Gesture Knitter allows the specification of gesture primitives that can then be combined to create more complex gestures using a visual declarative script. Designers can build custom recognizers by declaring them from scratch or by providing a demonstration that is automatically decoded into its primitive components. Our developer study shows that Gesture Knitter achieves high recognition accuracy despite minimal training data and delivers an expressive and creative design experience.","PeriodicalId":20451,"journal":{"name":"Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"48 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79914741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
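The primitive-combination idea behind Gesture Knitter can be illustrated with a small declarative sketch: gesture primitives as predicates over hand-pose frames, plus parallel and sequential combinators that knit them into a composite recognizer. The frame format and all function names are assumptions for illustration, not the toolkit's actual script grammar.

```python
def pinch(frame):
    return frame["thumb_index_dist"] < 0.02   # fingertips within 2 cm

def palm_up(frame):
    return frame["palm_normal_y"] > 0.8       # palm roughly facing up

def both(*primitives):
    """All primitives must hold in the same frame (parallel combination)."""
    return lambda frame: all(p(frame) for p in primitives)

def sequence(*stages):
    """Stages must be satisfied in order across the frame stream."""
    def recognize(frames):
        i = 0
        for frame in frames:
            if stages[i](frame):
                i += 1
                if i == len(stages):
                    return True
        return False
    return recognize

# "Pinch while palm up, then release": a composite gesture built
# declaratively from two primitives and a negation.
grab_release = sequence(both(pinch, palm_up),
                        lambda f: not pinch(f))

frames = [
    {"thumb_index_dist": 0.05, "palm_normal_y": 0.9},  # open hand
    {"thumb_index_dist": 0.01, "palm_normal_y": 0.9},  # pinch, palm up
    {"thumb_index_dist": 0.06, "palm_normal_y": 0.2},  # release
]
assert grab_release(frames)
```

Because each primitive needs only a threshold (or a short demonstration to fit one), composite recognizers can be built with far less data than training a monolithic deep model per gesture, which is the trade-off the paper's developer study examines.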
Journal: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems