
Latest publications from Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems

Controlling Maximal Voluntary Contraction of the Upper Limb Muscles by Facial Electrical Stimulation
Pub Date: 2018-04-21 | DOI: 10.1145/3173574.3173968
Arinobu Niijima, T. Isezaki, Ryosuke Aoki, Tomoki Watanabe, Tomohiro Yamada
In this paper, we propose to use facial electrical stimulation to control maximal voluntary contraction (MVC) of the upper limbs. The method is based on a body mechanism in which the contraction of the masseter muscles enhances MVC of the limb muscles. Facial electrical stimulation is applied to the masseter muscles and the lips. The former is to enhance the MVC by causing involuntary contraction of the masseter muscles, and the latter is to suppress the MVC by interfering with voluntary contraction of the masseter muscles. In a user study, we used electromyography sensors on the upper limbs to evaluate the effects of the facial electrical stimulation on the MVC of the upper limbs. The experimental results show that the MVC was controlled by the facial electrical stimulation. We assume that the proposed method is useful for athletes because the MVC is linked to sports performance.
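The abstract quantifies upper-limb effort relative to MVC using surface EMG. As background for readers unfamiliar with that measure, the sketch below shows the common normalisation step: the RMS amplitude of a task-time EMG window expressed as a percentage of the RMS recorded during a maximal-effort reference trial. It is a minimal Python illustration with invented sample values and function names, not the authors' analysis pipeline.

```python
import numpy as np

def rms(signal: np.ndarray) -> float:
    """Root-mean-square amplitude of an EMG window."""
    return float(np.sqrt(np.mean(np.square(signal))))

def percent_mvc(task_window: np.ndarray, mvc_reference: np.ndarray) -> float:
    """Express a task-time contraction as a percentage of the MVC baseline.

    task_window:    EMG samples recorded while the participant performs the task.
    mvc_reference:  EMG samples from a maximal voluntary contraction trial.
    Argument names are hypothetical; this is not the authors' pipeline.
    """
    return 100.0 * rms(task_window) / rms(mvc_reference)

# Invented numbers, purely for illustration.
mvc_trial = np.array([0.9, -1.1, 1.0, -0.8, 1.2, -1.0])
task_trial = np.array([0.4, -0.5, 0.6, -0.3, 0.5, -0.4])
print(f"Contraction level: {percent_mvc(task_trial, mvc_trial):.1f}% MVC")
```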
Citations: 8
Considering Agency and Data Granularity in the Design of Visualization Tools
Pub Date: 2018-04-21 | DOI: 10.1145/3173574.3174212
G. Méndez, Miguel A. Nacenta, Uta Hinrichs
Previous research has identified trade-offs when it comes to designing visualization tools. While constructive "bottom-up" tools promote a hands-on, user-driven design process that enables a deep understanding and control of the visual mapping, automated tools are more efficient and allow people to rapidly explore complex alternative designs, often at the cost of transparency. We investigate how to design visualization tools that support a user-driven, transparent design process while enabling efficiency and automation, through a series of design workshops that looked at how both visualization experts and novices approach this problem. Participants produced a variety of solutions that range from example-based approaches expanding constructive visualization to solutions in which the visualization tool infers solutions on behalf of the designer, e.g., based on data attributes. On a higher level, these findings highlight agency and granularity as dimensions that can guide the design of visualization tools in this space.
Citations: 17
Looks Can Be Deceiving: Using Gaze Visualisation to Predict and Mislead Opponents in Strategic Gameplay
Pub Date: 2018-04-21 | DOI: 10.1145/3173574.3173835
Joshua Newn, Fraser Allison, Eduardo Velloso, F. Vetere
In competitive co-located gameplay, players use their opponents' gaze to make predictions about their plans while simultaneously managing their own gaze to avoid giving away their plans. This socially competitive dimension is lacking in most online games, where players are out of sight of each other. We conducted a lab study using a strategic online game, finding that (1) players are better at discerning their opponent's plans when shown a live visualisation of the opponent's gaze, and (2) players who are aware that their gaze is tracked will manipulate their gaze to keep their intentions hidden. We describe the strategies that players employed, with varying degrees of success, to deceive their opponent through their gaze behaviour. This gaze-based deception adds an effortful and challenging aspect to the competition. Lastly, we discuss the various implications of our findings and their applicability for future game design.
Citations: 38
How Far Is Up?: Bringing the Counterpointed Triad Technique to Digital Storybook Apps
Pub Date: 2018-04-21 | DOI: 10.1145/3173574.3174093
B. Sargeant, F. Mueller
Interactive storybooks, such as those available on the iPad, offer multiple ways to convey a story, mostly through visual, textual and audio content. How to effectively deliver this combination of content so that it supports positive social and educational development in pre-literate children is relatively underexplored. In order to address this issue, we introduce the "Counterpointed Triad Technique". Drawing from traditional literary theory, we design visual, textual and audio content that each conveys different aspects of a story. We explore the use of this technique through a storybook we designed ourselves called "How Far Is Up?". A study involving 26 kindergarten children shows that "How Far Is Up?" can engage pre-literate children while they are reading alone and also when they are reading with an adult. Based on our craft knowledge and study findings, we present a set of design strategies that aim to provide designers with practical guidance on how to create engaging interactive digital storybooks.
Citations: 1
In the Eye of the Student: An Intangible Cultural Heritage Experience, with a Human-Computer Interaction Twist
Pub Date: 2018-04-21 | DOI: 10.1145/3173574.3173864
Danilo Giglitto, Shaimaa Y. Lazem, Anne Preston
We critically engage with CHI communities emerging outside the global North (ArabHCI and AfriCHI) to explore how participation is configured and enacted within socio-cultural and political contexts fundamentally different from Western societies. We contribute to recent discussions about postcolonialism and decolonization of HCI by focusing on non-Western future technology designers. Our lens was a course designed to engage Egyptian students with a local yet culturally-distant community to design applications for documenting intangible heritage. Through action research, the instructors reflect on selected students' activities. Despite deploying a flexible learning curriculum that encourages greater autonomy, the students perceived themselves as having less agency than other institutional stakeholders involved in the project. Further, some of them struggled to empathize with the community as the impact of the cultural differences on configuring participation was profound. We discuss the implications of the findings for HCI education and for international cross-cultural design projects.
Citations: 19
From Scanning Brains to Reading Minds: Talking to Engineers about Brain-Computer Interface
Pub Date: 2018-04-21 | DOI: 10.1145/3173574.3173897
Nick Merrill, J. Chuang
We presented software engineers in the San Francisco Bay Area with a working brain-computer interface (BCI) to surface the narratives and anxieties around these devices among technical practitioners. Despite this group's heterogeneous beliefs about the exact nature of the mind, we find a shared belief that the contents of the mind will someday be "read" or "decoded" by machines. Our findings help illuminate BCI's imagined futures among engineers. We highlight opportunities for researchers to involve themselves preemptively in this nascent space of intimate biosensing devices, suggesting our findings' relevance to long-term futures of privacy and cybersecurity.
Citations: 14
Full-Body Ownership Illusion Can Change Our Emotion
Pub Date: 2018-04-21 | DOI: 10.1145/3173574.3174175
Joohee Jun, Myeongul Jung, So-yeon Kim, K. Kim
Recent advances in technology have allowed users to experience an illusory feeling of full body ownership of a virtual avatar. Such virtual embodiment has the power to elicit perceptual, behavioral or cognitive changes related to oneself; however, its emotional effects have not yet been rigorously examined. To address this issue, we investigated emotional changes as a function of the level of the illusion (Study 1) and whether changes in the facial expression of a virtual avatar can modulate the effects of the illusion (Study 2). The results revealed that stronger illusory feelings of full body ownership were induced in the synchronous condition, and participants reported higher valence in the synchronous condition in both Studies 1 and 2. The results from Study 2 suggested that the facial expression of a virtual avatar can modulate participants' emotions. We discuss the prospects of the development of therapeutic techniques using such illusions to help people with emotion-related symptoms such as depression and social anxiety.
Citations: 27
Uncertainty Visualization Influences how Humans Aggregate Discrepant Information
Pub Date: 2018-04-21 | DOI: 10.1145/3173574.3174079
Miriam Greis, Aditi Joshi, Ken Singer, A. Schmidt, Tonja Machulla
The number of sensors in our surroundings that provide the same information steadily increases. Since sensing is prone to errors, sensors may disagree. For example, a GPS-based tracker on the phone and a sensor on the bike wheel may provide discrepant estimates on traveled distance. This poses a user dilemma, namely how to reconcile the conflicting information into one estimate. We investigated whether visualizing the uncertainty associated with sensor measurements improves the quality of users' inference. We tested four visualizations with increasingly detailed representation of uncertainty. Our study repeatedly presented two sensor measurements with varying degrees of inconsistency to participants who indicated their best guess of the "true" value. We found that uncertainty information improves users' estimates, especially if sensors differ largely in their associated variability. Improvements were larger for information-rich visualizations. Based on our findings, we provide an interactive tool to select the optimal visualization for displaying conflicting information.
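As a reference point for how two discrepant readings with known uncertainties could be reconciled into a single estimate, a statistically standard rule is inverse-variance (precision) weighting, in which the less uncertain sensor pulls the combined value towards its own reading. The sketch below illustrates that rule on the abstract's GPS-versus-wheel-sensor scenario; the numbers, variances, and function name are invented for illustration and this is not code or analysis from the paper.

```python
def fuse_estimates(x1: float, var1: float, x2: float, var2: float) -> tuple[float, float]:
    """Precision-weighted (inverse-variance) average of two discrepant readings.

    Returns the fused estimate and its variance; the sensor with the smaller
    variance pulls the result towards its own reading.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)

# Invented example: phone GPS reports 10.2 km (sd 0.5 km),
# a wheel sensor reports 9.6 km (sd 0.1 km).
estimate, variance = fuse_estimates(10.2, 0.5 ** 2, 9.6, 0.1 ** 2)
print(f"Fused distance: {estimate:.2f} km (sd {variance ** 0.5:.2f} km)")
```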
Citations: 28
Presenting The Accessory Approach: A Start-up's Journey Towards Designing An Engaging Fall Detection Device
Pub Date: 2018-04-21 | DOI: 10.1145/3173574.3174133
Trine Møller
This paper explores a design experiment concerning the development of a personalised and engaging wearable fall detection device customised for care home residents. The design experiment focuses on a start-up company's design process, which utilises a new design approach, here named the accessory approach, to accommodate a wearer's cultural-fit purposes. Influenced by accessory design, which belongs neither to fashion nor to jewellery, the accessory approach is a way of designing wearables that involve both functional and expressive qualities, including the wearer's physical, psychological and social needs. The accessory approach is proven to enable first-hand insight into the wearer's preferences, leading to in-depth knowledge and enhanced iterative processes, which support the design of a customised device. This type of knowledge is important for the HCI community as it brings accessory design disciplines into play when seeking to understand and design for individual needs, creating engaging wearable designs.
Citations: 7
Training Person-Specific Gaze Estimators from User Interactions with Multiple Devices
Pub Date: 2018-04-21 | DOI: 10.1145/3173574.3174198
Xucong Zhang, Michael Xuelin Huang, Yusuke Sugano, A. Bulling
Learning-based gaze estimation has significant potential to enable attentive user interfaces and gaze-based interaction on the billions of camera-equipped handheld devices and ambient displays. While training accurate person- and device-independent gaze estimators remains challenging, person-specific training is feasible but requires tedious data collection for each target device. To address these limitations, we present the first method to train person-specific gaze estimators across multiple devices. At the core of our method is a single convolutional neural network with shared feature extraction layers and device-specific branches that we train from face images and corresponding on-screen gaze locations. Detailed evaluations on a new dataset of interactions with five common devices (mobile phone, tablet, laptop, desktop computer, smart TV) and three common applications (mobile game, text editing, media center) demonstrate the significant potential of cross-device training. We further explore training with gaze locations derived from natural interactions, such as mouse or touch input.
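The architecture idea in the abstract — one convolutional trunk with shared feature-extraction layers plus a device-specific branch per target device — can be sketched as follows. This is a minimal PyTorch illustration with invented layer sizes, device names, and input resolution; it is not the authors' published model or training code.

```python
import torch
import torch.nn as nn

class MultiDeviceGazeNet(nn.Module):
    """Sketch of a gaze estimator with shared feature layers and per-device branches.

    Layer sizes, names, and the input resolution are invented for illustration;
    this is not the authors' published architecture.
    """

    def __init__(self, devices: list[str]):
        super().__init__()
        # Feature-extraction layers shared across all devices.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        # One regression branch per device, predicting a 2-D on-screen gaze point.
        self.heads = nn.ModuleDict({
            d: nn.Sequential(nn.Linear(64 * 16, 128), nn.ReLU(), nn.Linear(128, 2))
            for d in devices
        })

    def forward(self, face_image: torch.Tensor, device: str) -> torch.Tensor:
        return self.heads[device](self.shared(face_image))

# Invented usage: one forward pass for a 64x64 face crop captured on a phone.
model = MultiDeviceGazeNet(["phone", "tablet", "laptop", "desktop", "tv"])
gaze_xy = model(torch.randn(1, 3, 64, 64), "phone")
print(gaze_xy.shape)  # torch.Size([1, 2])
```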
Citations: 50