
Latest publications from the Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services

Speech and Hands-free interaction: myths, challenges, and opportunities
Cosmin Munteanu, Gerald Penn
HCI research has long been dedicated to facilitating information transfer between humans and machines better and more naturally. Unfortunately, humans' most natural form of communication, speech, is also one of the most difficult modalities for machines to understand - despite, and perhaps because, it is the highest-bandwidth communication channel we possess. While significant research efforts, from engineering to linguistics to the cognitive sciences, have been spent on improving machines' ability to understand speech, the MobileHCI community (and the HCI field at large) has been relatively timid in embracing this modality as a central focus of research. This can be attributed in part to the unexpected variations in error rates when processing speech, in contrast with often-unfounded claims of success from industry, but also to the intrinsic difficulty of designing and especially evaluating speech and natural language interfaces. As such, the development of interactive speech-based systems is mostly driven by engineering efforts to improve such systems with respect to largely arbitrary performance metrics. Such developments have often been void of any user-centered design principles or consideration for usability or usefulness. The goal of this course is to inform the MobileHCI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for researchers and practitioners to learn more about how speech recognition and speech synthesis work, what their limitations are, and how they could be used to enhance current interaction paradigms. Through this, we hope that HCI researchers and practitioners will learn how to combine recent advances in speech processing with user-centred principles in designing more usable and useful speech-based interactive systems.
DOI: 10.1145/3098279.3119919 · Published: 2017-09-04
Citations: 2
PeriMR: a prototyping tool for head-mounted peripheral light displays in mixed reality
Uwe Gruenefeld, Tim Claudius Stratmann, Wilko Heuten, Susanne CJ Boll
Nowadays, Mixed and Virtual Reality devices suffer from a field of view that is too small compared to human visual perception. Although a larger field of view is useful (e.g., conveying peripheral information or improving situation awareness), technical limitations prevent the extension of the field of view. A way to overcome these limitations is to extend the field of view with peripheral light displays. However, there are no tools to support the design of peripheral light displays for Mixed or Virtual Reality devices. Therefore, we present our prototyping tool PeriMR, which allows researchers to develop new peripheral head-mounted light displays for Mixed and Virtual Reality.
DOI: 10.1145/3098279.3125439 · Published: 2017-09-04
Citations: 9
Prototyping sonic interaction for walking
Nassrin Hajinejad, Barbara Grüter, Licinio Gomes Roque
Sounds play a substantial role in the experience of movement activities such as walking. Drawing on the movement-inducing effects of sound, sonic interaction opens up numerous possibilities to modify the walker's movements and experience. We argue that designing sonic interaction for movement activities demands an experiential awareness of the interplay of sound, body movement, and use situation, and propose a prototyping method to understand possibilities and challenges related to the design of mobile sonic interaction. In this paper, we present a rapid prototyping system that enables non-expert users to design sonic interaction for walking and to experience their design in the real-world context. We discuss the way this prototyping system allows designers to experience how their design ideas unfold in mobile use and affect the walking.
DOI: 10.1145/3098279.3122141 · Published: 2017-09-04
Citations: 2
Language learning on-the-go: opportune moments and design of mobile microlearning sessions
Tilman Dingler, Dominik Weber, M. Pielot, J. Cooper, Chung-Cheng Chang, N. Henze
Learning a foreign language is a daunting and time-consuming task. People often lack the time or motivation to sit down and engage with learning content on a regular basis. We present an investigation of microlearning sessions on mobile phones, in which we focus on session triggers, presentation methods, and user context. Therefore, we built an Android app that prompts users to review foreign language vocabulary directly through notifications or through app usage across the day. We present results from a controlled and an in-the-wild study, in which we explore engagement and user context. In-app sessions lasted longer, but notifications added a significant number of "quick" learning sessions. 37.6% of sessions were completed in transit, hence learning-on-the-go was well received. Neither the use of boredom as trigger nor the presentation (flashcard and multiple-choice) had a significant effect. We conclude with implications for the design of mobile microlearning applications with context-awareness.
DOI: 10.1145/3098279.3098565 · Published: 2017-09-04
Citations: 54
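The microlearning abstract above compares two presentation methods, flashcards and multiple-choice. As a minimal illustrative sketch (not the authors' implementation; the vocabulary and function name are hypothetical), a multiple-choice item can be built from a word list by pairing the correct translation with randomly sampled distractors:

```python
import random

def build_multiple_choice(vocab, target, n_choices=4, rng=None):
    """Build one multiple-choice item for a (foreign, translation) pair:
    the foreign word as prompt, plus n_choices candidate translations,
    exactly one of which is correct."""
    rng = rng or random.Random()
    foreign, correct = target
    # Distractors: every other translation in the vocabulary.
    distractors = [t for f, t in vocab if t != correct]
    options = rng.sample(distractors, n_choices - 1) + [correct]
    rng.shuffle(options)
    return {"prompt": foreign, "options": options, "answer": correct}

# Hypothetical German-English vocabulary for illustration only.
vocab = [("der Hund", "dog"), ("die Katze", "cat"),
         ("das Haus", "house"), ("der Baum", "tree"), ("das Buch", "book")]
item = build_multiple_choice(vocab, ("der Hund", "dog"), rng=random.Random(0))
```

In the study's terms, such an item could be delivered either via a notification or inside another app; the abstract reports that the presentation method itself had no significant effect.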
TapSense: combining self-report patterns and typing characteristics for smartphone based emotion detection
Surjya Ghosh, Niloy Ganguly, Bivas Mitra, Pradipta De
Typing-based communication applications on smartphones, like WhatsApp, can induce emotional exchanges. The effects of an emotion in one session of communication can persist across sessions. In this work, we attempt automatic emotion detection by jointly modeling the typing characteristics and the persistence of emotion. Typing characteristics, like speed, number of mistakes, and special characters used, are inferred from typing sessions. Self-reports recording emotion states after typing sessions capture the persistence of emotion. We use this data to train a personalized machine learning model for multi-state emotion classification. We implemented an Android-based smartphone application, called TapSense, that records typing-related metadata and uses a carefully designed Experience Sampling Method (ESM) to collect emotion self-reports. We are able to classify four emotion states - happy, sad, stressed, and relaxed - with an average accuracy (AUCROC) of 84% for a group of 22 participants who installed and used TapSense for 3 weeks.
DOI: 10.1145/3098279.3098564 · Published: 2017-09-04
Citations: 36
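The TapSense abstract above names session-level typing characteristics (speed, number of mistakes, special characters used) as the model's input features. A minimal sketch of that kind of feature extraction, assuming a hypothetical event format of `(timestamp_seconds, key)` pairs with `"<BKSP>"` marking a backspace (the paper's actual metadata format is not given in the abstract):

```python
def typing_features(events):
    """Session-level typing features of the kind the TapSense abstract
    mentions: typing speed, mistake count (approximated by backspaces),
    and the fraction of special characters."""
    if len(events) < 2:
        raise ValueError("need at least two key events")
    duration = events[-1][0] - events[0][0]
    keys = [k for _, k in events]
    backspaces = sum(1 for k in keys if k == "<BKSP>")
    specials = sum(1 for k in keys
                   if len(k) == 1 and not k.isalnum() and k != " ")
    return {
        "chars_per_second": len(keys) / duration,
        "mistakes": backspaces,
        "special_char_ratio": specials / len(keys),
    }

session = [(0.0, "h"), (0.5, "i"), (1.0, "<BKSP>"), (1.5, "!"), (2.0, "x")]
feats = typing_features(session)
```

A feature vector like this, paired with post-session self-reports as labels, is the shape of training data a personalized emotion classifier would consume.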
Improving software-reduced touchscreen latency
N. Henze, Sven Mayer, Huy Viet Le, V. Schwind
The latency of current mobile devices' touchscreens is around 100ms and has been widely explored. Latency down to 2ms is noticeable, and latency as low as 25ms reduces users' performance. Previous work reduced touch latency by extrapolating a finger's movement using an ensemble of shallow neural networks and showed that predicting 33ms into the future increases users' performance. Unfortunately, this prediction has a high error. Predicting beyond 33ms did not increase participants' performance, and the error affected the subjective assessment. We use more recent machine learning techniques to reduce the prediction error. We train LSTM networks and multilayer perceptrons using a large data set and regularization. We show that linear extrapolation causes a 116.7% higher error and the previously proposed ensembles of shallow networks cause a 26.7% higher error compared to the LSTM networks. The trained models, the data used for testing, and the source code are available on GitHub.
DOI: 10.1145/3098279.3122150 · Published: 2017-09-04
Citations: 21
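The latency abstract above uses linear extrapolation as its weakest baseline (116.7% higher error than the LSTMs). As a sketch of what that baseline does, assuming hypothetical touch samples of the form `(t_ms, x, y)` (the paper's data format and function names are not specified in the abstract):

```python
def extrapolate_touch(samples, horizon_ms=33.0):
    """Linear-extrapolation baseline: estimate the finger position
    horizon_ms into the future from the velocity between the last two
    touch samples. Each sample is (t_ms, x, y)."""
    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dt = t1 - t0
    if dt <= 0:
        raise ValueError("samples must have strictly increasing timestamps")
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * horizon_ms, y1 + vy * horizon_ms)

# Finger moving at 0.5 px/ms on both axes, predicted 33 ms ahead.
pred = extrapolate_touch([(0.0, 0.0, 0.0), (10.0, 5.0, 5.0)])
```

Because real finger trajectories curve and accelerate, this constant-velocity assumption overshoots on turns, which is the kind of error a learned sequence model such as an LSTM can reduce.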
CETA: open, affordable and portable mixed-reality environment for low-cost tablets
Sebastián Marichal, A. Rosales, Gustavo Sansone, A. Pires, Ewelina Bakala, Fernando González Perilli, J. Blat
Mixed-reality environments allow tangible interaction to be combined with digital feedback, letting interaction designers draw benefits from both the real and virtual worlds. This interaction paradigm is also being applied in classrooms for learning purposes. However, most of the time, the devices supporting mixed-reality interaction are neither portable nor affordable, which can be a limitation in the learning context. In this paper we propose CETA, a mixed-reality environment using low-cost Android tablets, which tackles portability and cost issues. In addition, CETA is open-source, reproducible, and extensible.
DOI: 10.1145/3098279.3125435 · Published: 2017-09-04
Citations: 5
Crafting collocated interactions: exploring physical representations of personal data
Maria Karyda
This PhD project explores a third wave of research on Mobile Collocated Interactions, which focuses on craft. Strongly inspired by the field of Data Physicalization, it aims to explore how people would physically share (physiological) personal data in collocated activities. To that end, it investigates potential relationships between personal data and meaningful personal objects for individuals. Future steps involve prototyping towards crafting collocated interactions with personal data.
DOI: 10.1145/3098279.3119927 · Published: 2017-09-04
Citations: 2
The UX of IoT: unpacking the internet of things
Scott Jenson
When discussing the Internet of Things (IoT), product concepts usually involve overly complex systems with baroque-like setup and confusing behaviors. This workshop will step a bit back from the hype and create a richer, more nuanced way of talking about the IoT. The workshop will start with a structure to the UX of IoT, creating a UX taxonomy and then challenge participants to "think small". Special focus will be put on the Physical Web, a lightweight technology that lets any place or device wirelessly broadcast a URL, unlocking very simple and lightweight interactions. Participants will be provoked to think: how can we reduce an IoT concept to the bare minimum? Can we focus on user needs and not be carried away by the technology to create something lightweight and simple? Workshop participants are expected to come prepared with one or two IoT scenarios they would like to work on; then, through a series of exercises, refine one of these down into a much simpler, user-focused design.
DOI: 10.1145/3098279.3119838 · Published: 2017-09-04
Citations: 3
Creating community fountains by (re-)designing the digital layer of way-finding pillars
Katta Spiel, Katharina Werner, Oliver Hödl, Lisa Ehrenstrasser, G. Fitzpatrick
Way-finding pillars for tourists aid them in navigating an unknown area. The pillars show nearby points of interest, offer information about public transport, and provide a scale for the neighbourhood. Through a series of studies with tourists and locals, we establish their different needs. In this space, we developed Mappy, a mobile application which augments and enhances way-finding pillars with an added digital layer. Mappy opens up opportunities for reappropriation of, and engagement with, the pillars. Seeing the pillars beyond their initial use case by involving a diverse range of people allowed us to develop the digital layer, and subsequently the overall meaning of way-finding pillars, further: as "community fountains" they engage locals and tourists alike and can provoke encounters between them.
DOI: 10.1145/3098279.3122135 · Published: 2017-09-04
Citations: 2