
Latest publications: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems

Session details: Whole body sensing and interaction
Otmar Hilliges
Pub Date: 2014-04-26 DOI: 10.1145/3250973
Citations: 0
Under pressure: sensing stress of computer users
Pub Date: 2014-04-26 DOI: 10.1145/2556288.2557165
Javier Hernández, P. Paredes, A. Roseway, M. Czerwinski
Recognizing when computer users are stressed can help reduce their frustration and prevent a large variety of negative health conditions associated with chronic stress. However, measuring stress non-invasively and continuously at work remains an open challenge. This work explores the possibility of using a pressure-sensitive keyboard and a capacitive mouse to discriminate between stressful and relaxed conditions in a laboratory study. During a 30 minute session, 24 participants performed several computerized tasks consisting of expressive writing, text transcription, and mouse clicking. During the stressful conditions, the large majority of the participants showed significantly increased typing pressure (>79% of the participants) and more contact with the surface of the mouse (75% of the participants). We discuss the potential implications of this work and provide recommendations for future work.
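The core comparison in this study — mean typing pressure under relaxed versus stressful conditions — can be sketched in a few lines. The sample values and the 10% threshold below are hypothetical illustrations, not data or parameters from the paper:

```python
from statistics import mean

def pressure_increase(relaxed: list[float], stressed: list[float]) -> float:
    """Relative change in mean per-keystroke pressure between two conditions."""
    base = mean(relaxed)
    return (mean(stressed) - base) / base

# Hypothetical per-keystroke pressure samples (arbitrary sensor units).
relaxed_session = [0.42, 0.38, 0.45, 0.40, 0.41]
stressed_session = [0.55, 0.58, 0.52, 0.57, 0.54]

if pressure_increase(relaxed_session, stressed_session) > 0.10:
    print("typing pressure markedly higher under stress")
```

A real analysis would use a per-participant statistical test rather than a fixed threshold, since the authors report significance across 24 participants.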
Citations: 168
Session details: Learning and games
A. Ogan
Pub Date: 2014-04-26 DOI: 10.1145/3250971
Citations: 0
The usability of CommandMaps in realistic tasks
Pub Date: 2014-04-26 DOI: 10.1145/2556288.2556976
Joey Scarr, A. Cockburn, C. Gutwin, Andrea Bunt, Jared Cechanowicz
CommandMaps are a promising interface technique that flattens command hierarchies and exploits human spatial memory to provide rapid access to commands. CommandMaps have performed favorably in constrained cued-selection studies, but have not yet been tested in the context of real tasks. In this paper we present two real-world implementations of CommandMaps: one for Microsoft Word and one for an image editing program called Pinta. We use these as our experimental platforms in two experiments. In the first, we show that CommandMaps demonstrate performance and subjective advantages in a realistic task. In the second, we observe naturalistic use of CommandMaps over the course of a week, and gather qualitative data from interviews, questionnaires, and conversations. Our results provide substantial insight into users' reactions to CommandMaps, showing that they are positively received by users and allowing us to provide concrete recommendations to designers regarding when and how they should be implemented in real applications.
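The idea of a CommandMap — a flattened command hierarchy in which every command keeps a spatially stable position — can be illustrated with a minimal sketch. The command names and grid width below are assumptions for illustration, not the layouts used in the paper's Word or Pinta implementations:

```python
# A CommandMap flattens the menu hierarchy into one grid; each command
# keeps a fixed (row, col) cell that users can commit to spatial memory.
def build_command_map(commands: list[str], columns: int = 8) -> dict[str, tuple[int, int]]:
    return {cmd: divmod(i, columns) for i, cmd in enumerate(commands)}

word_commands = ["Bold", "Italic", "Underline", "Paste", "Copy", "Cut",
                 "Find", "Replace", "Save", "Print"]
layout = build_command_map(word_commands)
print(layout["Save"])  # (1, 0): the ninth command wraps to the second row
```

Because the mapping never changes between invocations, repeated use lets selection shift from visual search to spatial recall — the mechanism the study evaluates.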
Citations: 14
Experimental evaluation of user interfaces for visual indoor navigation
Pub Date: 2014-04-26 DOI: 10.1145/2556288.2557003
Andreas Möller, M. Kranz, Stefan Diewald, L. Roalter, Robert Huitl, T. Stockinger, Marion Koelle, Patrick Lindemann
Mobile location recognition by capturing images of the environment (visual localization) is a promising technique for indoor navigation in arbitrary surroundings. However, it has barely been investigated so far how the user interface (UI) can cope with the challenges of the vision-based localization technique, such as varying quality of the query images. We implemented a novel UI for visual localization, consisting of Virtual Reality (VR) and Augmented Reality (AR) views that actively communicate and ensure localization accuracy. If necessary, the system encourages the user to point the smartphone at distinctive regions to improve localization quality. We evaluated the UI in an experimental navigation task with a prototype, informed by initial evaluation results using design mockups. We found that VR can contribute to efficient and effective indoor navigation even at unreliable location and orientation accuracy. We discuss identified challenges and share lessons learned as recommendations for future work.
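Visual localization of the kind described here is typically an image-retrieval problem: match a descriptor of the query image against a database of reference images with known positions. The three-dimensional descriptors and location names below are toy stand-ins for real image features, purely for illustration:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Reference images with known indoor positions (descriptors are toy values).
reference = {
    "corridor_A": [0.9, 0.1, 0.2],
    "lobby":      [0.1, 0.8, 0.3],
    "stairwell":  [0.2, 0.2, 0.9],
}

def localize(query: list[float], refs: dict) -> tuple[str, float]:
    """Return the best-matching place and its similarity score."""
    best = max(refs, key=lambda loc: cosine(query, refs[loc]))
    return best, cosine(query, refs[best])

place, confidence = localize([0.85, 0.15, 0.25], reference)
# A low confidence score is what a UI like the one described above could
# use to prompt the user to re-point the camera at a distinctive region.
print(place)  # corridor_A
```

The paper's contribution is the UI layer on top of such a pipeline — communicating the confidence estimate and soliciting better query images — rather than the retrieval step itself.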
Citations: 51
Money talks: tracking personal finances
Pub Date: 2014-04-26 DOI: 10.1145/2556288.2556975
Joseph Kaye, Mary McCuistion, Rebecca Gulotta, David A. Shamma
How do people keep track of their money? In this paper we present a preliminary scoping study of how 14 individuals in the San Francisco Bay Area earn, save, spend and understand money and their personal and family finances. We describe the practices we developed for exploring the sensitive topic of money, and then discuss three sets of findings. The first is the emotional component of the relationship people have with their finances. Second, we discuss the tools and processes people used to keep track of their financial situation. Finally we discuss how people account for the unknown and unpredictable nature of the future through their financial decisions. We conclude by discussing the future of studies of money and finance in HCI, and reflect on the opportunities for improving tools to aid people in managing and planning their finances.
Citations: 97
EverTutor: automatically creating interactive guided tutorials on smartphones by user demonstration
Pub Date: 2014-04-26 DOI: 10.1145/2556288.2557407
Cheng-Yao Wang, Wei-Chen Chu, Hou-Ren Chen, Chun-Yen Hsu, Mike Y. Chen
We present EverTutor, a system that automatically generates interactive tutorials on smartphones from user demonstration. For tutorial authors, it simplifies the tutorial creation. For tutorial users, it provides contextual step-by-step guidance and avoids the frequent context switching between tutorials and users' primary tasks. In order to generate the tutorials automatically, EverTutor records low-level touch events to detect gestures and identify on-screen targets. When a tutorial is browsed, the system uses vision-based techniques to locate the target regions and overlays the corresponding input prompt contextually. It also identifies the correctness of users' interaction to guide the users step by step. We conducted a 6-person user study for creating tutorials and a 12-person user study for browsing tutorials, and we compared EverTutor's interactive tutorials to static and video ones. Study results show that creating tutorials by EverTutor is simpler and faster than producing static and video tutorials. Also, when using the tutorials, the task completion time for interactive tutorials was 3-6 times faster than for static and video tutorials regardless of age group. In terms of user preference, 83% of the users chose interactive type as the preferred tutorial type and rated it easiest to follow and easiest to understand.
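EverTutor's recording step — detecting gestures from low-level touch events — can be sketched as a simple classifier over (time, x, y) samples. The distance and duration thresholds below are assumed values for illustration, not those used in the system:

```python
import math

def classify_gesture(events: list[tuple[float, float, float]]) -> str:
    """Classify one touch stroke, given (t, x, y) samples from finger-down to finger-up."""
    t0, x0, y0 = events[0]
    t1, x1, y1 = events[-1]
    distance = math.hypot(x1 - x0, y1 - y0)
    duration = t1 - t0
    if distance > 50:          # pixels; assumed swipe threshold
        return "swipe"
    return "long-press" if duration > 0.5 else "tap"

print(classify_gesture([(0.0, 100, 200), (0.08, 102, 201)]))   # tap
print(classify_gesture([(0.0, 100, 200), (0.30, 300, 205)]))   # swipe
```

The detected gesture, together with the on-screen target under the touch point, is what such a system would replay later as a contextual step-by-step prompt.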
Citations: 30
The effects of embodied persuasive games on player attitudes toward people using wheelchairs
Pub Date: 2014-04-26 DOI: 10.1145/2556288.2556962
K. Gerling, R. Mandryk, M. Birk, Matthew K. Miller, Rita Orji
People using wheelchairs face barriers in their daily lives, many of which are created by people who surround them. Promoting positive attitudes towards persons with disabilities is an integral step in removing these barriers and improving their quality of life. In this context, persuasive games offer an opportunity of encouraging attitude change. We created a wheelchair-controlled persuasive game to study how embodied interaction can be applied to influence player attitudes over time. Our results show that the game intervention successfully raised awareness for challenges that people using wheelchairs face, and that embodied interaction is a more effective approach than traditional input in terms of retaining attitude change over time. Based on these findings, we provide design strategies for embodied interaction in persuasive games, and outline how our findings can be leveraged to help designers create effective persuasive experiences beyond games.
Citations: 50
Session details: Personal health and wellbeing
J. Huh
Pub Date: 2014-04-26 DOI: 10.1145/3251010
Citations: 0
Session details: Learning and education
D. Tatar
Pub Date: 2014-04-26 DOI: 10.1145/3251024
Citations: 0