
Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services: Latest Publications

Gesture morpher: video-based retargeting of multi-touch interactions
Ramik Sadana, Y. Li
We present Gesture Morpher, a tool for prototyping and testing multi-touch interactions based on video recordings of target application behaviors, e.g., a sequence of screenshots recorded by a screen capture tool. Gesture Morpher extracts continuous behaviors, such as transformations of UI content, from video recordings and suggests a set of multi-touch interactions suitable for achieving these behaviors. Designers can easily test different interactions on a touch device, with a visual response automatically synthesized from the video recording, all without any programming. We discuss the range of multi-touch interaction scenarios Gesture Morpher supports, our method for extracting continuous interaction behaviors from video recordings, and techniques for associating touch input with the output effects extracted from the videos.
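The abstract describes recovering continuous transformations of UI content (e.g., a pinch-zoom) from video frames. The paper's own algorithm is not given here; as a purely illustrative sketch (function name and data are hypothetical, not the authors' method), a uniform scale and translation can be fit by least squares to feature points tracked between two frames:

```python
def fit_scale_translation(src, dst):
    """Least-squares fit of dst ≈ s * src + t for paired 2D points.

    A recovered s far from 1 suggests a zoom-like behavior; a large t
    suggests a pan. Illustrative only, not the paper's method.
    """
    n = len(src)
    # Centroids of the two point sets.
    cxs = sum(p[0] for p in src) / n
    cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n
    cyd = sum(p[1] for p in dst) / n
    # Closed-form least-squares scale: cross-covariance over source variance.
    num = sum((x - cxs) * (u - cxd) + (y - cys) * (v - cyd)
              for (x, y), (u, v) in zip(src, dst))
    den = sum((x - cxs) ** 2 + (y - cys) ** 2 for x, y in src)
    s = num / den
    return s, (cxd - s * cxs, cyd - s * cys)

# Points tracked across two frames: content scaled 2x about the origin,
# then shifted by (1, 1) -- consistent with a pinch-zoom plus a pan.
s, t = fit_scale_translation([(0, 0), (2, 0), (0, 2)], [(1, 1), (5, 1), (1, 5)])
```

A tool of this kind could map the recovered scale to a pinch gesture and the translation to a drag, then synthesize the visual response by replaying the transform.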
Citations: 2
Playing on AREEF: evaluation of an underwater augmented reality game for kids
L. Oppermann, Lisa Blum, Marius Shekow
This paper reports on a study of AREEF, a multi-player Underwater Augmented Reality (UWAR) experience for swimming pools. Using off-the-shelf components combined with a custom-made waterproof case and an innovative game concept, AREEF puts computer game technology to recreational and educational use in and under water. After an overview of the experience, we present evidence gained from a user-centred design process, including a pilot study with 3 kids and a final evaluation with 36 kids. Our discussion covers technical findings regarding marker placement, tracking, and device handling, as well as design-related issues such as virtual object placement and the need for extremely obvious user interaction and feedback when staging a mobile underwater experience.
Citations: 31
Understanding call logs of smartphone users for making future calls
Mehwish Nasim, A. Rextin, Numair Khan, Muhammad Muddasir Malik
In this measurement study, we analyze whether mobile phone users exhibit temporal regularity in their mobile communication. To this end, we collected a mobile phone usage dataset from a developing country, Pakistan. The data consist of 783 users and 229,450 communication events. We found a number of interesting patterns in the data, both at the aggregate level and at the dyadic level. Some notable results: the number of calls to different alters consistently follows the rank-size rule; a communication event between an ego-alter (user-contact) pair greatly increases the chances of another communication event; certain ego-alter pairs tend to communicate more over weekends; and ego-alter pairs exhibit autocorrelation over various time quanta. Identifying such idiosyncrasies in ego-alter communication can help improve the calling experience of smartphone users by automatically (smartly) sorting the call log without any manual intervention.
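The rank-size rule mentioned in the abstract says that when an ego's contacts (alters) are ranked by call count, the count at rank r is roughly proportional to 1/r, so count × rank stays roughly constant. A toy illustration on a synthetic call log (not the authors' data):

```python
from collections import Counter

def rank_size(call_log):
    """Per-contact call counts in rank order (rank 1 = most-called alter)."""
    return [count for _, count in Counter(call_log).most_common()]

# Synthetic ego's call log: four alters with Zipf-like call volumes.
log = ["a"] * 40 + ["b"] * 20 + ["c"] * 13 + ["d"] * 10
ranked = rank_size(log)

# Under the rank-size rule, count * rank is approximately constant.
products = [count * rank for rank, count in enumerate(ranked, start=1)]
```

A call log sorted by this ranking is one simple way to realize the "smart sorting" the authors propose, before adding temporal features such as weekend preference.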
Citations: 7
In situ CAD capture
Aditya Sankar, S. Seitz
We present an interactive system to capture CAD-like 3D models of indoor scenes on a mobile device. To overcome the sensory and computational limitations of the mobile platform, we employ an in situ, semi-automated approach and harness the user's high-level knowledge of the scene to assist the reconstruction and modeling algorithms. The modeling proceeds in two stages: (1) the user captures the 3D shape and dimensions of the room; (2) the user then uses voice commands and an augmented reality sketching interface to insert objects of interest, such as furniture, artwork, doors and windows. Our system recognizes the sketches and adds a corresponding 3D model to the scene at the appropriate location. The key contributions of this work are the design of a multi-modal user interface that effectively captures the user's semantic understanding of the scene, and the underlying algorithms that process the input to produce useful reconstructions.
Citations: 7
Session details: Games & learning
C. Santoro
Citations: 0
Session details: Input techniques
A. Lucero
Citations: 0
Session details: Supporting visual impairment
F. Paternò
Citations: 0
Time to exercise!: an aide-memoire stroke app for post-stroke arm rehabilitation
Nicholas Micallef, L. Baillie, Stephen Uzor
A majority of stroke survivors (up to 80%) have an arm impairment that persists over the long term (>12 months). Physiotherapy experts believe that a rehabilitation aide-memoire could help these patients [25]. Hence, with input from physiotherapists, stroke experts and former stroke patients, we designed the Aide-Memoire Stroke (AIMS) app to help survivors remember to exercise more frequently. We evaluated its use in a controlled field evaluation on a smartphone, tablet and smartwatch. Since one of the main features of the app is to remind stroke survivors to exercise, we also investigated reminder modalities (i.e., visual, vibrate, audio, speech). One key finding is that stroke survivors opted for a combination of modalities to remind them to conduct their exercises. Stroke survivors also seemed to prefer smartphones over other mobile devices because they are easy to use, familiar, and easier to handle with one arm.
Citations: 36
NavCog: a navigational cognitive assistant for the blind
D. Ahmetovic, Cole Gleason, Chengxiong Ruan, Kris M. Kitani, Hironobu Takagi, C. Asakawa
Turn-by-turn navigation is a useful paradigm for assisting people with visual impairments during mobility as it reduces the cognitive load of having to simultaneously sense, localize and plan. To realize such a system, it is necessary to be able to automatically localize the user with sufficient accuracy, provide timely and efficient instructions and have the ability to easily deploy the system to new spaces. We propose a smartphone-based system that provides turn-by-turn navigation assistance based on accurate real-time localization over large spaces. In addition to basic navigation capabilities, our system also informs the user about nearby points-of-interest (POI) and accessibility issues (e.g., stairs ahead). After deploying the system on a university campus across several indoor and outdoor areas, we evaluated it with six blind subjects and showed that our system is capable of guiding visually impaired users in complex and unfamiliar environments.
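Turn-by-turn guidance of the kind described reduces a planned path to a simple cue at each waypoint. As a rough sketch (not NavCog's implementation; assumes planar x/y map coordinates with y increasing "up", and a hypothetical 30° threshold for what counts as a turn):

```python
import math

def turn_instructions(waypoints):
    """Derive 'turn left/right' cues from consecutive path segments."""
    cues = []
    for a, b, c in zip(waypoints, waypoints[1:], waypoints[2:]):
        v1 = (b[0] - a[0], b[1] - a[1])  # incoming segment
        v2 = (c[0] - b[0], c[1] - b[1])  # outgoing segment
        # Signed turn angle via cross and dot products; positive = left
        # (counter-clockwise) under the y-up convention assumed here.
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        angle = math.degrees(math.atan2(cross, dot))
        if angle > 30:
            cues.append("turn left")
        elif angle < -30:
            cues.append("turn right")
        else:
            cues.append("continue straight")
    return cues
```

For example, `turn_instructions([(0, 0), (0, 1), (1, 1), (1, 2)])` describes a path heading north, east, then north again. A real system would interleave such cues with the POI and accessibility announcements the abstract mentions.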
Citations: 206
What can I say?: addressing user experience challenges of a mobile voice user interface for accessibility
E. Corbett, Astrid Weber
Voice interactions on mobile phones are most often used to augment or supplement touch-based interactions for users' convenience. However, for people with limited hand dexterity caused by various forms of motor impairment, voice interactions can have a significant impact, and in some cases even enable independent interaction with a mobile device for the first time. For these users, a Mobile Voice User Interface (M-VUI), which allows for completely hands-free, voice-only interaction, would provide a high level of accessibility and independence. Implementing such a system requires research to address long-standing usability challenges of voice interaction that degrade user experience because voice commands are difficult to learn and discover. In this paper we address these concerns, reporting on research conducted to improve the visibility and learnability of voice commands in an M-VUI application being developed on the Android platform. Our research confirmed long-standing challenges with voice interactions while exploring several methods for improving the onboarding and learning experience. Based on our findings we offer a set of implications for the design of M-VUIs.
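One way to soften the discoverability problem the authors describe is to tolerate near-miss utterances by fuzzy-matching recognized speech against the known command set. A minimal sketch using Python's standard-library difflib (the command list, function name, and 0.6 cutoff are hypothetical, not from the paper):

```python
import difflib

# Hypothetical command vocabulary for an M-VUI prototype.
COMMANDS = ["open camera", "send message", "read notifications", "go back"]

def match_command(utterance, cutoff=0.6):
    """Map a recognized utterance to the closest known command, or None.

    difflib.get_close_matches returns commands whose similarity ratio to
    the utterance is at least `cutoff`, best match first.
    """
    hits = difflib.get_close_matches(utterance.lower(), COMMANDS,
                                     n=1, cutoff=cutoff)
    return hits[0] if hits else None
```

With this, a slightly misrecognized phrase like "opne camera" still resolves to "open camera", while unrelated speech falls through to a "what can I say?" help prompt, which is exactly the moment the paper's onboarding techniques target.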
Citations: 121