
Proceedings of the 22nd International Conference on Intelligent User Interfaces: Latest Publications

Supporting Trust in Autonomous Driving
Pub Date : 2017-03-07 DOI: 10.1145/3025171.3025198
Renate Häuslschmid, Max von Bülow, Bastian Pfleging, A. Butz
Autonomous cars will likely hit the market soon, but trust in such a technology is one of the major points in the public debate. Drivers who have always been in complete control of their car are expected to willingly hand over control and blindly trust a technology that could kill them. We argue that trust in autonomous driving can be increased by means of a driver interface that visualizes the car's interpretation of the current situation and its corresponding actions. To verify this, we compared different visualizations overlaid on a driving scene in a user study: (1) a chauffeur avatar, (2) a world in miniature, and (3) a display of the car's indicators as the baseline. The world-in-miniature visualization increased trust the most. The human-like chauffeur avatar can also increase trust; however, we did not find a significant difference between the chauffeur and the baseline.
Citations: 96
CogniLearn
Pub Date : 2017-03-07 DOI: 10.1145/3025171.3025213
Srujana Gattupalli, Dylan Ebert, Michalis Papakostas, F. Makedon, V. Athitsos
This paper proposes a novel system for assessing physical exercises specifically designed for cognitive behavior monitoring. The proposed system provides decision support to experts working on early childhood development. Our work is based on the well-established Head-Toes-Knees-Shoulders (HTKS) framework, known for its solid psychometric properties and its ability to assess cognitive dysfunction. HTKS serves as a useful measure of behavioral self-regulation. Our system, CogniLearn, automates the capture and motion analysis of users performing the HTKS game and provides detailed evaluations using state-of-the-art computer vision and deep-learning-based techniques for activity recognition and evaluation. The proposed system is supported by an intuitive, purpose-built user interface that can help human experts cross-validate and/or refine their diagnosis. To evaluate our system, we created a novel dataset, which we have made publicly available to encourage further experimentation. The dataset consists of 15 subjects performing 4 different variations of the HTKS task and contains more than 60,000 RGB frames in total, of which 4,443 are fully annotated.
Citations: 21
Deep Sequential Recommendation for Personalized Adaptive User Interfaces
Pub Date : 2017-03-07 DOI: 10.1145/3025171.3025207
Harold Soh, S. Sanner, Madeleine White, G. Jamieson
Adaptive user interfaces (AUIs) can enhance the usability of complex software by providing real-time contextual adaptation and assistance. Ideally, AUIs should be personalized and versatile, i.e., able to adapt to each user, who may perform a variety of complex tasks. But this is difficult to achieve when there are many interaction elements and per-user data is sparse. In this paper, we propose an architecture for personalized AUIs that leverages developments in (1) deep learning, particularly gated recurrent units, to efficiently learn user interaction patterns, (2) collaborative filtering techniques that enable sharing of data among users, and (3) fast approximate nearest-neighbor methods in Euclidean spaces for quick UI control and/or content recommendations. Specifically, interaction histories are embedded in a learned space along with users and interaction elements; this allows the AUI to query and recommend likely next actions based on similar usage patterns across the user base. In a comparative evaluation on user-interface, web-browsing, and e-learning datasets, the deep recurrent neural network (DRNN) outperforms state-of-the-art tensor-factorization and metric embedding methods.
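The embed-and-lookup idea in this abstract can be sketched in a few lines. Everything below (the element embeddings, the toy histories, and mean-pooling as a stand-in for the paper's GRU encoder) is our own illustrative construction, not the authors' code:

```python
import numpy as np

# Toy stand-in: each UI element gets a fixed random embedding; a history is
# embedded by mean-pooling (the paper learns this encoding with a GRU).
rng = np.random.default_rng(0)
NUM_ELEMENTS, DIM = 10, 8
element_emb = rng.normal(size=(NUM_ELEMENTS, DIM))

def embed_history(history):
    """Map an interaction history (element indices) into the shared space."""
    return element_emb[list(history)].mean(axis=0)

def recommend_next(history, user_histories, k=1):
    """Find the users whose embedded history prefixes are Euclidean-nearest
    to the query and return the action each of them took next."""
    q = embed_history(history)
    scored = sorted(
        user_histories,
        key=lambda h: np.linalg.norm(embed_history(h[:len(history)]) - q),
    )
    return [h[len(history)] for h in scored[:k]]

# Two users share the query's prefix [1, 2, 3]; their next steps are proposed.
users = [[1, 2, 3, 4], [1, 2, 3, 5], [7, 8, 9, 0]]
print(recommend_next([1, 2, 3], users, k=2))  # [4, 5]
```

In a real system the exhaustive `sorted` scan would be replaced by an approximate nearest-neighbor index, as point (3) of the abstract suggests.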
Citations: 52
Adaptive View Management for Drone Teleoperation in Complex 3D Structures
Pub Date : 2017-03-07 DOI: 10.1145/3025171.3025179
J. Thomason, P. Ratsamee, K. Kiyokawa, Pakpoom Kriangkomol, J. Orlosky, T. Mashita, Yuuki Uranishi, H. Takemura
Drone navigation in complex environments poses many problems for teleoperators. Especially in 3D structures like buildings or tunnels, viewpoints are often limited to the drone's current camera view, nearby objects can be collision hazards, and frequent occlusion can hinder accurate manipulation. To address these issues, we have developed a novel interface for teleoperation that provides the user with environment-adaptive viewpoints that are automatically configured to improve safety and smooth user operation. This real-time adaptive viewpoint system takes robot position, orientation, and 3D point-cloud information into account to modify the user's viewpoint and maximize visibility. Our prototype uses simultaneous localization and mapping (SLAM) based reconstruction with an omnidirectional camera, and we use the resulting models as well as simulations in a series of preliminary experiments testing navigation of various structures. Results suggest that automatic viewpoint generation can outperform first- and third-person view interfaces for virtual teleoperators in terms of ease of control and accuracy of robot operation.
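As a rough illustration of the viewpoint-adaptation idea, the sketch below (entirely our own toy construction; the function name, offset, and clearance threshold are invented, not taken from the paper) places a third-person camera behind the drone and pulls it closer when point-cloud obstacles would intersect it:

```python
import numpy as np

def camera_position(drone_pos, heading, cloud, offset=3.0, clearance=1.0):
    """Place a virtual camera `offset` meters behind the drone along its
    heading; shrink the offset until the camera clears all obstacle points."""
    heading = heading / np.linalg.norm(heading)
    while offset > 0.5:
        cam = drone_pos - offset * heading
        if all(np.linalg.norm(cam - p) >= clearance for p in cloud):
            return cam
        offset -= 0.5  # pull the camera in toward the drone
    return drone_pos - 0.5 * heading  # fall back to a close-in view

drone = np.array([0.0, 0.0, 0.0])
heading = np.array([1.0, 0.0, 0.0])
cloud = [np.array([-3.0, 0.0, 0.0])]  # an obstacle right at the default spot
print(camera_position(drone, heading, cloud))  # camera moves in to [-2, 0, 0]
```

A full system would also reorient the camera and weigh visibility of the drone's surroundings, which this sketch omits.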
Citations: 17
DyFAV: Dynamic Feature Selection and Voting for Real-time Recognition of Fingerspelled Alphabet using Wearables
Pub Date : 2017-03-07 DOI: 10.1145/3025171.3025216
Prajwal Paudyal, Junghyo Lee, Ayan Banerjee, S. Gupta
Recent research has shown that reliable recognition of sign language words and phrases using user-friendly and non-invasive armbands is feasible and desirable. This work provides an analysis and implementation of fingerspelling recognition (FR) in such systems, a much harder problem due to the lack of distinctive hand movements. A novel algorithm called DyFAV (Dynamic Feature Selection and Voting) is proposed for this purpose; it exploits the fact that fingerspelling has a finite corpus (26 letters for ASL). The system uses an independent multiple-agent voting approach to identify letters with high accuracy. The independent voting of the agents ensures that the algorithm is highly parallelizable, so recognition times can be kept low to suit real-time mobile applications. The results are demonstrated on the entire ASL alphabet corpus for nine people with limited training, achieving an average recognition accuracy of 95.36%, which is better than the state of the art for armband sensors. The mobile, non-invasive, and real-time nature of the technology is demonstrated by evaluating performance on various types of Android phones and remote server configurations.
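The independent-voting step can be illustrated with a minimal sketch (our own toy code; the paper's agents each classify from dynamically selected sensor features, which we abstract away here as precomputed per-agent predictions):

```python
from collections import Counter

def vote(agent_predictions):
    """Majority vote over independent per-agent letter predictions.
    Because agents vote independently, their classifications can run in
    parallel and only this cheap aggregation is sequential."""
    counts = Counter(agent_predictions)
    letter, _ = counts.most_common(1)[0]
    return letter

# Three agents say 'a', one dissents: the majority wins. A fuller
# implementation might break ties using per-agent confidence scores.
print(vote(['a', 'a', 'b', 'a']))  # 'a'
```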
Citations: 28
"How May I Help You?": Modeling Twitter Customer Service Conversations Using Fine-Grained Dialogue Acts
Pub Date : 2017-03-07 DOI: 10.1145/3025171.3025191
Shereen Oraby, Pritam Gundecha, J. Mahmud, Mansurul Bhuiyan, R. Akkiraju
Given the increasing popularity of customer service dialogue on Twitter, analysis of conversation data is essential to understand trends in customer and agent behavior for the purpose of automating customer service interactions. In this work, we develop a novel taxonomy of fine-grained "dialogue acts" frequently observed in customer service, showcasing acts that are more suited to the domain than the more generic existing taxonomies. Using a sequential SVM-HMM model, we model conversation flow, predicting the dialogue act of a given turn in real-time. We characterize differences between customer and agent behavior in Twitter customer service conversations, and investigate the effect of testing our system on different customer service industries. Finally, we use a data-driven approach to predict important conversation outcomes: customer satisfaction, customer frustration, and overall problem resolution. We show that the type and location of certain dialogue acts in a conversation have a significant effect on the probability of desirable and undesirable outcomes, and present actionable rules based on our findings. The patterns and rules we derive can be used as guidelines for outcome-driven automated customer service platforms.
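The sequential part of an SVM-HMM, combining per-turn act scores with act-to-act transition scores via Viterbi decoding, can be sketched as follows. The act labels and all numbers are hypothetical; in the paper the per-turn scores would come from the trained SVM:

```python
import numpy as np

ACTS = ["greeting", "request", "answer"]
# emission[t][a]: score that turn t carries dialogue act a (toy values)
emission = np.array([[2.0, 0.5, 0.1],
                     [0.2, 1.8, 0.4],
                     [0.1, 0.3, 1.9]])
# transition[a][b]: score of act b directly following act a (toy values)
transition = np.array([[0.1, 1.0, 0.2],
                       [0.1, 0.2, 1.0],
                       [0.5, 0.3, 0.2]])

def viterbi(emission, transition):
    """Return the highest-scoring act sequence for the conversation."""
    T, A = emission.shape
    score = np.zeros((T, A))
    back = np.zeros((T, A), dtype=int)
    score[0] = emission[0]
    for t in range(1, T):
        # cand[a, b] = best score ending in act a at t-1, then act b at t
        cand = score[t - 1][:, None] + transition + emission[t][None, :]
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0)
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):        # backtrack through best predecessors
        path.append(int(back[t][path[-1]]))
    return [ACTS[a] for a in reversed(path)]

print(viterbi(emission, transition))  # ['greeting', 'request', 'answer']
```

For real-time prediction of "the dialogue act of a given turn", the same recurrence can be run incrementally, keeping only the current column of scores.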
Citations: 33
Interaction Design for Rehabilitation
Pub Date : 2017-03-07 DOI: 10.1145/3025171.3026365
P. Markopoulos
Well-known trends pertaining to the aging of the population and the rising costs of healthcare motivate the development of rehabilitation technology. There is a considerable body of work in this area, including efforts to create serious games, virtual reality, and robotic applications. While innovative technologies have been introduced over the years, and researchers often report promising experimental results, these technologies have not yet delivered the anticipated benefits. The causes of this apparent failure become evident when taking a closer look at the case of stroke rehabilitation, one of the most heavily researched topics in rehabilitation technology. It is argued that improvements should be sought by centering the design on an understanding of patient needs; allowing patients, therapists, and caregivers to personalize solutions to the needs of patients; implementing effective feedback and motivation strategies; and developing an in-depth understanding of the socio-technical system in which the rehabilitation technology will be embedded. These are classic challenges that human-computer interaction (HCI) researchers have been dealing with for years, which is why the field of rehabilitation technology requires considerable input from HCI researchers, and which explains the growing number of relevant HCI publications pertaining to rehabilitation. The talk reviews related research carried out at the Eindhoven University of Technology together with collaborating institutes, which has examined the value of tangible user interfaces and embodied interaction in rehabilitation, the design of playful interactions and games with a functional purpose, and feedback design. I shall discuss the work we have done to develop rehabilitation technologies for the TagTrainer system in the doctoral research of Daniel Tetteroo [2,3,4] and the explorations of wearable solutions in the doctoral research of Wang Qi [5,6]. With our research being design-driven and explorative, I will also discuss the current state of the art in the field and the challenges that need to be addressed for human-computer interaction research to make a larger impact in the domain of rehabilitation technology.
Citations: 1
CQAVis: Visual Text Analytics for Community Question Answering
Pub Date : 2017-03-07 DOI: 10.1145/3025171.3025210
Enamul Hoque, Shafiq R. Joty, Luis Marquez, G. Carenini
Community question answering (CQA) forums can provide effective means for sharing information and addressing a user's information needs about particular topics. However, many such online forums are not moderated, resulting in many low-quality and redundant comments, which makes it very challenging for users to find the appropriate answers to their questions. In this paper, we apply a user-centered design approach to develop a system, CQAVis, which supports users in identifying high-quality comments and getting their questions answered. Informed by the user's requirements, the system combines text analytics and interactive visualization techniques in a synergistic way. Given a new question posed by the user, the text-analytics module automatically finds relevant answers by exploring existing related questions and the comments within their threads. The visualization module then presents the search results to the user and supports exploration of related comments. We have evaluated the system in the wild by deploying it within a CQA forum with thousands of real users. Through the online study, we gained deeper insights into the potential utility of the system and learned generalizable lessons for designing visual text analytics systems for the domain of CQA forums.
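A minimal sketch of the "find relevant related questions" step, using plain bag-of-words cosine similarity. This is our own stand-in for illustration only; the paper's text-analytics module is considerably richer:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-count vectors (Counters)."""
    num = sum(a[w] * b[w] for w in a)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def related(query, questions):
    """Return the existing question most similar to the new query."""
    q = Counter(query.lower().split())
    ranked = sorted(questions,
                    key=lambda s: cosine(q, Counter(s.lower().split())),
                    reverse=True)
    return ranked[0]

qs = ["how do I reset my password",
      "best pizza in town",
      "laptop will not boot"]
print(related("forgot my password how to reset", qs))
```

Real deployments would use TF-IDF or learned embeddings rather than raw counts, but the retrieve-then-present pipeline is the same shape.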
Citations: 11
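The retrieval step the CQAVis abstract describes — matching a new question against existing question threads to surface relevant answers — can be sketched with a plain TF-IDF cosine-similarity ranker. This is an illustrative stand-in under stated assumptions, not the paper's actual text-analytics module; the tokenizer, weighting scheme, and sample threads below are all invented for the example.

```python
import math
from collections import Counter

def tokenize(text):
    # Naive whitespace tokenizer with light punctuation stripping (an assumption).
    return [w.lower().strip(".,?!") for w in text.split()]

def build_idf(docs):
    # Smoothed inverse document frequency over the existing threads.
    n = len(docs)
    df = Counter()
    for d in docs:
        df.update(set(tokenize(d)))
    return {t: math.log(n / c) + 1.0 for t, c in df.items()}

def vectorize(text, idf):
    # Terms unseen in the corpus get zero weight.
    tf = Counter(tokenize(text))
    return {t: c * idf.get(t, 0.0) for t, c in tf.items()}

def cosine(a, b):
    dot = sum(v * b.get(t, 0.0) for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_answers(question, threads):
    """Return (score, thread) pairs sorted by similarity to the question, best first."""
    idf = build_idf(threads)
    q = vectorize(question, idf)
    scored = [(cosine(q, vectorize(t, idf)), t) for t in threads]
    return sorted(scored, key=lambda s: -s[0])
```

A ranker like this would only be the first stage; the paper's system additionally filters for comment quality, which this sketch does not model.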
UI X-Ray: Interactive Mobile UI Testing Based on Computer Vision
Pub Date : 2017-03-07 DOI: 10.1145/3025171.3025190
Chun-Fu Chen, Marco Pistoia, Conglei Shi, Paolo Girolami, Joe W. Ligman, Y. Wang
User Interface/eXperience (UI/UX) significantly affects the lifetime of any software program, particularly mobile apps. A bad UX can undermine the success of a mobile app even if that app enables sophisticated capabilities. A good UX, however, needs to be supported by a highly functional and user-friendly UI design. In spite of the importance of building mobile apps based on solid UI designs, UI discrepancies---inconsistencies between UI design and implementation---are among the most numerous and expensive defects encountered during testing. This paper presents UI X-Ray, an interactive UI testing system that integrates computer-vision methods to facilitate the correction of UI discrepancies---such as inconsistent positions, sizes and colors of objects and fonts. Using UI X-Ray does not require any programming experience; therefore, UI X-Ray can be used even by non-programmers---particularly designers---which significantly reduces the overhead involved in writing tests. With its interactive interface, UI testers can quickly generate defect reports and revision instructions---which would otherwise be done manually. We verified UI X-Ray on 4 mobile apps whose entire development history was saved. UI X-Ray achieved a 99.03% true-positive rate, which significantly surpassed the 20.92% true-positive rate obtained via manual analysis. Furthermore, evaluating the results of our automated analysis can be completed quickly (< 1 minute per view on average) compared to hours of manual work required by UI testers. UI X-Ray also received appreciation from skilled designers, and it improves their current workflow for generating UI defect reports and revision instructions. The proposed system has recently become part of a commercial product.
{"title":"UI X-Ray: Interactive Mobile UI Testing Based on Computer Vision","authors":"Chun-Fu Chen, Marco Pistoia, Conglei Shi, Paolo Girolami, Joe W. Ligman, Y. Wang","doi":"10.1145/3025171.3025190","DOIUrl":"https://doi.org/10.1145/3025171.3025190","url":null,"abstract":"User Interface/eXperience (UI/UX) significantly affects the lifetime of any software program, particularly mobile apps. A bad UX can undermine the success of a mobile app even if that app enables sophisticated capabilities. A good UX, however, needs to be supported of a highly functional and user friendly UI design. In spite of the importance of building mobile apps based on solid UI designs, UI discrepancies---inconsistencies between UI design and implementation---are among the most numerous and expensive defects encountered during testing. This paper presents UI X-Ray, an interactive UI testing system that integrates computer-vision methods to facilitate the correction of UI discrepancies---such as inconsistent positions, sizes and colors of objects and fonts. Using UI X-Ray does not require any programming experience; therefore, UI X-Ray can be used even by non-programmers---particularly designers---which significantly reduces the overhead involved in writing tests. With the feature of interactive interface, UI testers can quickly generate defect reports and revision instructions---which would otherwise be done manually. We verified our UI X-Ray on 4 developed mobile apps of which the entire development history was saved. UI X-Ray achieved a 99.03% true-positive rate, which significantly surpassed the 20.92% true-positive rate obtained via manual analysis. Furthermore, evaluating the results of our automated analysis can be completed quickly (< 1 minute per view on average) compared to hours of manual work required by UI testers. 
On the other hand, UI X-Ray received the appreciations from skilled designers and UI X-Ray improves their current work flow to generate UI defect reports and revision instructions. The proposed system, UI X-Ray, presented in this paper has recently become part of a commercial product.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130554670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
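The kind of UI discrepancy UI X-Ray reports — position, size, and color mismatches between a design spec and the rendered screen — can be illustrated with a simple per-element attribute comparison. The element records, names, and tolerances below are invented for illustration; in the actual system these attributes are extracted from screenshots with computer vision, which this sketch does not attempt.

```python
import math

def color_distance(c1, c2):
    """Euclidean distance between two RGB colors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def find_discrepancies(design, rendered, pos_tol=4, color_tol=8):
    """Compare each element's geometry and color against the design spec.

    `design` and `rendered` map element names to dicts with x/y/w/h (pixels)
    and an RGB `color` tuple; tolerances absorb small rendering differences.
    """
    reports = []
    for name, spec in design.items():
        actual = rendered.get(name)
        if actual is None:
            reports.append(f"{name}: missing in implementation")
            continue
        for attr in ("x", "y", "w", "h"):
            delta = actual[attr] - spec[attr]
            if abs(delta) > pos_tol:
                reports.append(f"{name}: {attr} off by {delta}px")
        if color_distance(spec["color"], actual["color"]) > color_tol:
            reports.append(f"{name}: color mismatch")
    return reports
```

For example, a button rendered 12px lower than specified but with a near-identical color would yield a single position report and no color report.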
Modern Touchscreen Keyboards as Intelligent User Interfaces: A Research Review
Pub Date : 2017-03-07 DOI: 10.1145/3025171.3026367
Shumin Zhai
Essential to mobile communication, the touchscreen keyboard is the most ubiquitous intelligent user interface on modern mobile phones. Developing smarter, more efficient, easy to learn, and fun to use keyboards has presented many fascinating IUI research and design questions. Some have been addressed by academic research and practitioners in industry, while others remain significant ongoing research challenges. In this IUI 2017 keynote address I will review and synthesize the progress and open research questions of the past 15 years in text input, focusing on those my co-authors and I have directly dealt with through publications, such as the cost-benefit equations of automation and prediction [9], the power of machine/statistical intelligence [4, 7, 12], the human performance models fundamental to the design of error-correction algorithms [1, 2, 8], spatial scaling from a phone to a watch and the implications on human-machine labor division [5], user behavior and learning innovation [7, 11, 12, 13], and the challenges of evaluating the longitudinal effects of personalization and adaptation [4]. Through this research program review, I will illustrate why intelligent user interfaces, or the combination of machine intelligence and human factors, holds the future of human-computer interaction, and information technology at large.
{"title":"Modern Touchscreen Keyboards as Intelligent User Interfaces: A Research Review","authors":"Shumin Zhai","doi":"10.1145/3025171.3026367","DOIUrl":"https://doi.org/10.1145/3025171.3026367","url":null,"abstract":"Essential to mobile communication, the touchscreen keyboard is the most ubiquitous intelligent user interface on modern mobile phones. Developing smarter, more efficient, easy to learn, and fun to use keyboards has presented many fascinating IUI research and design questions. Some have been addressed by academic research and practitioners in industry, while others remain significant ongoing research challenges. In this IUI 2017 keynote address I will review and synthesize the progress and open research questions of the past 15 years in text input, focusing on those my co-authors and I have directly dealt with through publications, such as the cost-benefit equations of automation and prediction [9], the power of machine/statistical intelligence [4, 7, 12], the human performance models fundamental to the design of error-correction algorithms [1, 2, 8], spatial scaling from a phone to a watch and the implications on human-machine labor division [5], user behavior and learning innovation [7, 11, 12, 13], and the challenges of evaluating the longitudinal effects of personalization and adaptation [4]. 
Through this research program review, I will illustrate why intelligent user interfaces, or the combination of machine intelligence and human factors, holds the future of human-computer interaction, and information technology at large.","PeriodicalId":166632,"journal":{"name":"Proceedings of the 22nd International Conference on Intelligent User Interfaces","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126501607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
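One concrete instance of the "machine/statistical intelligence" the keynote abstract refers to is spatial touch decoding: treating each tap as a noisy observation of an intended key and combining a Gaussian spatial likelihood with a language-model prior. The key coordinates, noise width, and priors below are made-up illustration values under stated assumptions, not parameters from any shipping keyboard.

```python
import math

def log_gaussian(touch, center, sigma=12.0):
    """Isotropic Gaussian log-likelihood of a touch point given a key center.

    The normalization constant is dropped because it is shared by all keys.
    """
    dx, dy = touch[0] - center[0], touch[1] - center[1]
    return -(dx * dx + dy * dy) / (2.0 * sigma * sigma)

def decode_tap(touch, key_centers, prior):
    """Pick the key maximizing spatial log-likelihood plus log prior (Bayes rule)."""
    return max(key_centers,
               key=lambda k: log_gaussian(touch, key_centers[k]) + math.log(prior[k]))
```

The prior is what lets a statistical decoder "auto-correct": a tap landing exactly between two keys resolves to the key the language model considers more likely.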
Journal
Proceedings of the 22nd International Conference on Intelligent User Interfaces