
Latest publications from the Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services

Talkin' about the weather: incorporating TalkBack functionality and sonifications for accessible app design
Brianna J. Tomlinson, Jonathan H. Schuett, Woodbury Shortridge, Jehoshaph Chandran, B. Walker
As ubiquitous as weather is in our daily lives, individuals with vision impairments endure poorly designed user experiences when attempting to check the weather on their mobile devices. This is primarily caused by a mismatch between the visually based information layout on screen and the order in which a screen reader, such as TalkBack or VoiceOver, presents the information to users with visual impairments. Additionally, any image or icon included on the screen presents no information to the user if they are not able to see it. Therefore, we created the Accessible Weather App to run on Android and integrate with the TalkBack accessibility feature that is already available on the operating system. We also included a set of auditory weather icons which use sound, rather than visuals, to convey current weather conditions to users in a fast and pleasant way. This paper discusses the process for determining what features users would want and require, as well as our methodology for evaluating the beta version of our app.
DOI: 10.1145/2935334.2935390 (published 2016-09-06)
Citations: 5
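The auditory weather icons described in this abstract convey conditions through non-speech sound. As a minimal sketch (not the app's actual sound design; the pitch and pulse-rate values below are invented for illustration), each condition could be parameterized and rendered as a short pulsed tone:

```python
import math

# Hypothetical parameter mapping for auditory weather icons: each
# condition gets a base pitch (Hz) and a pulse rate (events/sec).
# These values are illustrative, not the Accessible Weather App's.
ICON_PARAMS = {
    "sunny": {"pitch_hz": 880.0, "rate": 1.0},
    "rain": {"pitch_hz": 440.0, "rate": 4.0},
    "storm": {"pitch_hz": 220.0, "rate": 6.0},
}

def render_icon(condition, duration_s=1.0, sample_rate=8000):
    """Synthesize a simple pulsed sine tone for a weather condition."""
    p = ICON_PARAMS[condition]
    n = int(duration_s * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        # Amplitude envelope pulses at the condition's repetition rate.
        env = 0.5 * (1 + math.cos(2 * math.pi * p["rate"] * t))
        samples.append(env * math.sin(2 * math.pi * p["pitch_hz"] * t))
    return samples

tone = render_icon("rain", duration_s=0.5)
print(len(tone))  # 4000 samples at 8 kHz
```

A mapping like this keeps the icon fast to play and recognizable without speech, which is the property the abstract emphasizes.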
Changing the camera-to-screen angle to improve AR browser usage
Ashley Colley, Wouter Van Vlaenderen, Johannes Schöning, Jonna Häkkilä
Mobile devices are currently the most commonly used platform to experience Augmented Reality (AR). Nevertheless, they typically provide a less than ideal ergonomic experience, requiring the user to operate them with arms raised. In this paper we evaluate how to improve the ergonomics of AR experiences by modifying the angle between the mobile device's camera and its display. Whereas current mobile device cameras point outward perpendicular to the back cover, we modify the camera angle to be 0, 45 and 90 degrees. In addition, we also investigate the use of the smartwatch as an AR browser form factor. Key findings are that, whilst the current approximately see-through configuration provides the fastest task completion times, a camera offset angle of 45° reduces task load and was preferred by users. When comparing different form factors and screen sizes, the smartwatch format was found to be unsuitable for AR browsing use.
DOI: 10.1145/2935334.2935384 (published 2016-09-06)
Citations: 5
A longitudinal evaluation of the acceptability and impact of a diet diary app for older adults with age-related macular degeneration
Lilit Hakobyan, J. Lumsden, R. Shaw, D. O’Sullivan
Ongoing advances in technology are increasing the scope for enhancing and supporting older adults' daily living. The digital divide between older and younger adults raises concerns, however, about the suitability of technological solutions for older adults, especially for those with impairments. Taking older adults with Age-Related Macular Degeneration (AMD) as a case study, we used user-centred and participatory design approaches to develop an assistive mobile app for self-monitoring their intake of food [12,13]. In this paper we report on findings of a longitudinal field evaluation of our app that was conducted to investigate how it was received and adopted by older adults with AMD and its impact on their lives. Demonstrating the benefit of applying inclusive design methods for technology for older adults, our findings reveal how the use of the app raises participants' awareness and facilitates self-monitoring of diet, encourages positive (diet) behaviour change, and encourages learning.
DOI: 10.1145/2935334.2935356 (published 2016-09-06)
Citations: 18
Sender-intended functions of emojis in US messaging
H. Cramer, Paloma de Juan, Joel R. Tetreault
Emojis are an extremely common occurrence in mobile communications, but their meaning is open to interpretation. We investigate motivations for their usage in mobile messaging in the US. This study asked 228 participants about the last time they used one or more emojis in a conversational message, and collected that message along with a description of the emojis' intended meaning and function. We discuss functional distinctions between adding additional emotional or situational meaning, adjusting tone, making a message more engaging to the recipient, conversation management, and relationship maintenance. We discuss lexical placement within messages, as well as social practices. We show that the social and linguistic functions of emojis are complex and varied, and that supporting emojis can facilitate important conversational functions.
DOI: 10.1145/2935334.2935370 (published 2016-09-06)
Citations: 104
Motion based remote camera control with mobile devices
Sabir Akhadov, M. Lancelle, J. Bazin, M. Gross
With current digital cameras and smartphones, taking photos and videos has never been easier. However, it is still difficult to take a photo of a brief action at the right time. In addition, editing captured videos, such as modifying the playback speed of some parts of a video, remains a time-consuming task. In this work we investigate how the motion sensors embedded in mobile devices, such as smartphones, can facilitate camera control. In particular, we show two families of applications: automatic camera trigger control for jump photos and automatic playback speed control (video speed ramping) for action videos. Our approach uses joint devices: a remote camera takes a photo or a video of the scene and is controlled by the motion sensor of a mobile device, either during or after recording. This allows casual users to achieve visually appealing effects with little effort, even for self-portraits.
DOI: 10.1145/2935334.2935372 (published 2016-09-06)
Citations: 2
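The jump-photo trigger described above can be sketched simply: assuming the jumper carries the sensing device, the accelerometer magnitude drops toward 0 g while airborne (free fall), so crossing a low threshold can fire the remote shutter. The threshold value and sample format here are hypothetical stand-ins, not the paper's implementation:

```python
# Minimal free-fall detector for motion-triggered jump photos.
# Samples are (x, y, z) accelerometer readings in g; at rest the
# magnitude is ~1 g, and while airborne it approaches 0 g.

def magnitude(sample):
    x, y, z = sample
    return (x * x + y * y + z * z) ** 0.5

def find_trigger_index(samples, threshold_g=0.3):
    """Return the index of the first free-fall sample, or None."""
    for i, s in enumerate(samples):
        if magnitude(s) < threshold_g:
            return i
    return None

# Simulated readings: standing (~1 g), then airborne (~0 g).
readings = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.1),
            (0.0, 0.05, 0.1), (0.0, 0.0, 0.05)]
print(find_trigger_index(readings))  # 2
```

In a real system the index found here would be the moment to signal the remote camera (or, after recording, the point around which to ramp the playback speed).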
Investigating the effects of splitting detailed views in Overview+Detail interfaces
Houssem Saidi, M. Serrano, E. Dubois
While several techniques offer more than one detailed view in Overview+Detail (O+D) interfaces, the optimal number of detailed views has not been investigated. The answer is not trivial: using a single detailed view offers a larger display size but only allows a sequential exploration of the overview; using several detailed views reduces the size of each view but allows a parallel exploration of the overview. In this paper we investigate the benefits of splitting the detailed view in O+D interfaces for working with very large graphs. We implemented an O+D interface where the overview is displayed on a large screen while 1, 2 or 4 split views are displayed on a tactile tablet. We experimentally evaluated the effect of the number of split views according to the number of nodes to connect. Using 4 split views is better than 1 or 2 when working on more than 2 nodes.
DOI: 10.1145/2935334.2935341 (published 2016-09-06)
Citations: 13
Discovering activities in your city using transitory search
J. Paay, J. Kjeldskov, M. Skov, Per M. Nielsen, Jon M. Pearce
Discovering activities in the city around you can be difficult with traditional search engines unless you know what you are looking for. Searching for inspiration on things to do requires a more open-ended and explorative approach. We introduce transitory search as a dynamic way of uncovering information about activities in the city around you that allows the user to start from a vague idea of what they are interested in, and iteratively modify their search using slider continuums to discover best-fit results. We present the design of a smartphone app exemplifying the idea of transitory search and give results from a lab evaluation and a 4-week field deployment involving 15 people in two different cities. Our findings indicate that transitory search on a mobile device both supports discovering activities in the city and more interestingly helps users reflect on and shape their preferences in situ. We also found that ambiguous slider continuums work well as people happily form and refine individual interpretations of them.
DOI: 10.1145/2935334.2935378 (published 2016-09-06)
Citations: 7
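The slider-driven ranking that transitory search implies can be sketched as follows: each activity carries attribute values in [0, 1], and the current slider positions surface the best-fit results by distance, updating as the user drags. The attribute names and sample data are invented for illustration; the paper's actual slider continuums and scoring are not specified at this level:

```python
# Score each activity by squared distance between its attribute
# vector and the current slider positions; lower is a better fit.

def score(activity, sliders):
    return sum((activity[k] - v) ** 2 for k, v in sliders.items())

def rank(activities, sliders, top=3):
    """Return the top best-fit activities for the current sliders."""
    return sorted(activities, key=lambda a: score(a, sliders))[:top]

activities = [
    {"name": "jazz bar", "energy": 0.4, "outdoors": 0.1},
    {"name": "city hike", "energy": 0.8, "outdoors": 1.0},
    {"name": "museum", "energy": 0.2, "outdoors": 0.0},
]
sliders = {"energy": 0.9, "outdoors": 0.9}
print([a["name"] for a in rank(activities, sliders, top=1)])  # ['city hike']
```

Because results are a continuous function of the slider positions rather than of an exact query, the user can start from a vague preference and iteratively refine it, which is the core of the transitory search idea.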
Bitey: an exploration of tooth click gestures for hands-free user interface control
Daniel Ashbrook, Carlos E. Tejada, Dhwanit Mehta, Anthony Jiminez, Goudam Muralitharam, S. Gajendra, R. Tallents
We present Bitey, a subtle, wearable device for enabling input via tooth clicks. Based on a bone-conduction microphone worn just above the ears, Bitey recognizes the click sounds from up to five different pairs of teeth, allowing fully hands-free interface control. We explore the space of tooth input and show that Bitey allows for a high degree of accuracy in distinguishing between different tooth clicks, with up to 94% accuracy under laboratory conditions for five different tooth pairs. Finally, we illustrate Bitey's potential through two demonstration applications: a list navigation and selection interface and a keyboard input method.
DOI: 10.1145/2935334.2935389 (published 2016-09-06)
Citations: 43
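The abstract does not detail Bitey's feature set or classifier. As a toy illustration of the classification idea only, a nearest-centroid rule over a single acoustic feature (here the dominant frequency of a click, with hypothetical per-pair centroids) already shows how distinct tooth pairs could be separated:

```python
# Toy nearest-centroid classifier: assign a click to the tooth pair
# whose centroid frequency is closest. The feature (dominant click
# frequency) and the centroid values are hypothetical illustrations,
# not Bitey's actual pipeline.

def classify_click(click_hz, centroids):
    """Return the tooth-pair label with the nearest centroid frequency."""
    return min(centroids, key=lambda pair: abs(centroids[pair] - click_hz))

# Hypothetical centroids learned from training clicks (Hz per pair).
centroids = {"front": 1200.0, "left": 900.0, "right": 1500.0}
print(classify_click(950.0, centroids))  # 'left'
```

A real system would use richer spectral features from the bone-conduction signal, but the decision structure (compare an incoming click against per-pair templates) can stay this simple.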
Natural group binding and cross-display object movement methods for wearable devices
T. Jokela, Parisa Pour Rezaei, Kaisa Väänänen
As wearable devices become more popular, situations where there are multiple persons present with such devices will become commonplace. In these situations, wearable devices could support collaborative tasks and experiences between co-located persons through multi-user applications. We present an elicitation study that gathers interaction methods for wearable devices from end users, covering two common tasks in co-located interaction: group binding and cross-display object movement. We report a total of 154 methods collected from 30 participants. We categorize the methods based on the metaphor and modality of interaction, and discuss the strengths and weaknesses of each category based on qualitative and quantitative feedback given by the participants.
DOI: 10.1145/2935334.2935346 (published 2016-09-06)
Citations: 2
ScrollingHome: bringing image-based indoor navigation to smartwatches
Dirk Wenig, A. Steenbergen, Johannes Schöning, Brent J. Hecht, R. Malaka
Providing pedestrian navigation instructions on small screens is a challenging task due to limited screen space. As image-based approaches for navigation have been successfully proven to outperform map-based navigation on mobile devices, we propose to bring image-based navigation to smartwatches. We contribute a straightforward pipeline to easily create image-based indoor navigation instructions that allow users to freely navigate in indoor environments without any localization infrastructure and with minimal user input on the smartwatch. In a user study, we show that our approach outperforms the current state-of-the-art application in terms of task completion time, perceived task load and perceived usability. In addition, we did not find an indication that there is a need to provide explicit directional instructions for image-based navigation on small screens.
DOI: 10.1145/2935334.2935373 (published 2016-09-06)
Citations: 9
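One way to read the image-based navigation idea above, assuming a route is stored as an ordered photo sequence: the watch's scroll position simply indexes into that sequence, which is why no localization infrastructure is needed. The mapping below is an illustrative sketch, not the paper's pipeline:

```python
def image_for_scroll(scroll_fraction, num_images):
    """Map a scroll position in [0, 1] to a photo index along the route."""
    idx = int(scroll_fraction * num_images)
    # Clamp so scroll_fraction == 1.0 still yields a valid index.
    return min(max(idx, 0), num_images - 1)

# Halfway through the scroll on a 20-photo route shows photo 10.
print(image_for_scroll(0.5, 20))  # 10
```

Because the user, not a positioning system, advances the sequence, the only input the smartwatch needs is the scroll gesture itself.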