
Proceedings of the 28th International Conference on Intelligent User Interfaces: Latest Publications

ASAP: Endowing Adaptation Capability to Agent in Human-Agent Interaction
Pub Date: 2023-03-27; DOI: 10.1145/3581641.3584081
Jieyeon Woo, C. Pelachaud, C. Achard
Socially Interactive Agents (SIAs) offer users interactive face-to-face conversations. They can take the role of a speaker and communicate their intentions and emotional states verbally and nonverbally; but they should also act as active listeners and interactive partners. In human-human interaction, interlocutors adapt their behaviors reciprocally and dynamically. Endowing SIAs with such adaptation capability can allow them to show social and engaging behaviors. In this paper, we focus on modeling reciprocal adaptation to generate SIA behaviors for both conversational roles, speaker and listener. We propose the Augmented Self-Attention Pruning (ASAP) neural network model. ASAP incorporates a recurrent neural network, the attention mechanism of transformers, and a pruning technique to learn reciprocal adaptation from multimodal social signals. We evaluate our work objectively, via several metrics, and subjectively, through a user perception study in which the SIA behaviors generated by ASAP are compared with those of other state-of-the-art models. Our results demonstrate that ASAP significantly outperforms the state-of-the-art models and thus show the importance of reciprocal adaptation modeling.
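To make the abstract's architecture description concrete, here is a minimal PyTorch sketch of how a recurrent encoder, transformer-style cross-attention, and pruning might be composed; the layer choices, dimensions, and pruning target are assumptions for illustration, not the paper's actual ASAP implementation.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class ReciprocalAdaptationModel(nn.Module):
    """Toy model combining an RNN, transformer-style attention, and pruning,
    in the spirit of ASAP (all architectural details here are assumptions)."""

    def __init__(self, feat_dim=64, hidden_dim=128, heads=4):
        super().__init__()
        # One recurrent encoder per interlocutor's multimodal feature stream.
        self.agent_rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.user_rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Cross-attention: the agent's states attend to the user's states,
        # a stand-in for modeling reciprocal adaptation between partners.
        self.cross_attn = nn.MultiheadAttention(hidden_dim, heads, batch_first=True)
        self.head = nn.Linear(hidden_dim, feat_dim)  # next-frame behavior features

    def forward(self, agent_feats, user_feats):
        a, _ = self.agent_rnn(agent_feats)   # (B, T, H)
        u, _ = self.user_rnn(user_feats)     # (B, T, H)
        adapted, _ = self.cross_attn(query=a, key=u, value=u)
        return self.head(adapted)            # predicted behavior per frame

model = ReciprocalAdaptationModel()
# L1-unstructured pruning of the attention output projection, standing in for
# the paper's pruning step (the actual pruning target is an assumption).
prune.l1_unstructured(model.cross_attn.out_proj, name="weight", amount=0.3)

agent = torch.randn(2, 50, 64)   # batch of 2 dyads, 50 frames, 64-dim features
user = torch.randn(2, 50, 64)
print(model(agent, user).shape)  # torch.Size([2, 50, 64])
```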
Citations: 2
Physiologically Attentive User Interface for Improved Robot Teleoperation
Pub Date: 2023-03-27; DOI: 10.1145/3581641.3584084
António Tavares, J. L. Silva, R. Ventura
User interfaces (UIs) are shifting from being attention-hungry to being attentive to users’ needs during interaction. Interfaces developed for robot teleoperation can be particularly complex, often displaying large amounts of information, which can increase the cognitive overload that degrades operator performance. This paper presents the development of a Physiologically Attentive User Interface (PAUI) prototype, preliminarily evaluated with six participants. Although the proposed approach aims to be generic, a case study on Urban Search and Rescue (USAR) operations involving a teleoperated robot was used. The robot considered provides an overly complex Graphical User Interface (GUI) whose source code is not accessible. This represents a recurring and challenging scenario in which robots are still in use, but technical updates are no longer offered, which usually means their abandonment. A major contribution of the approach is the possibility of recycling old systems while improving the UI made available to end users, taking their physiological data as input. The proposed PAUI analyses physiological data, facial expressions, and eye movements to classify three mental states (rest, workload, and stress). An Attentive User Interface (AUI) is then assembled by recycling a pre-existing GUI, which is dynamically modified according to the predicted mental state to improve the user's focus during mentally demanding situations. In addition to the novelty of PAUIs that take advantage of pre-existing GUIs, this work also contributes the design of a user experiment comprising mental-state induction tasks that successfully trigger high and low cognitive overload states. Results from the preliminary user evaluation revealed a tendency toward improved usefulness and ease of use of the PAUI, although without statistical significance due to the small number of subjects.
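As a rough illustration of the classification step, the sketch below trains an off-the-shelf classifier to map fused physiological/facial/gaze features to the three mental states, then maps the prediction to a UI adaptation; the feature set, model choice, and state-to-UI mapping are all assumptions, demonstrated on synthetic data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for fused physiological / facial / gaze features;
# real feature extraction (heart rate, action units, fixations, ...) is the hard part.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))          # 300 time windows x 12 fused features
y = rng.integers(0, 3, size=300)        # 0 = rest, 1 = workload, 2 = stress

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance-level on random data

def select_ui_mode(state: int) -> str:
    """Map the predicted mental state to a UI adaptation, echoing how the AUI
    dynamically modifies the recycled GUI (this mapping is an assumption)."""
    return {0: "full_detail", 1: "decluttered", 2: "minimal_alerts"}[state]
```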
Citations: 2
SoundToons: Exemplar-Based Authoring of Interactive Audio-Driven Animation Sprites
Pub Date: 2023-03-27; DOI: 10.1145/3581641.3584047
T. Chong, Hijung Valentina Shin, Deepali Aneja, T. Igarashi
Animations can come to life when they are synchronized with relevant sounds. Yet, synchronizing animations to audio requires tedious key-framing or programming, which is difficult for novice creators. There are existing tools that support audio-driven live animation, but they focus primarily on speech and have little or no support for non-speech sounds. We present SoundToons, an exemplar-based authoring tool for interactive, audio-driven animation focusing on non-speech sounds. Our tool enables novice creators to author live animations to a wide variety of non-speech sounds, such as clapping and instrumental music. We support two types of audio interactions: (1) discrete interaction, which triggers animations when a discrete sound event is detected, and (2) continuous, which synchronizes an animation to continuous audio parameters. By employing an exemplar-based iterative authoring approach, we empower novice creators to design and quickly refine interactive animations. User evaluations demonstrate that novice users can author and perform live audio-driven animation intuitively. Moreover, compared to other input modalities such as trackpads or foot pedals, users preferred using audio as an intuitive way to drive animation.
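The two interaction types can be illustrated with standard audio-analysis primitives. The sketch below uses librosa onset detection for the discrete case and an RMS loudness envelope for the continuous case, on a synthetic click signal; the mapping to sprite actions is an assumed stand-in for the tool's actual authoring logic.

```python
import numpy as np
import librosa

sr = 22050
# Synthetic one-second "clap": a short click in silence (stands in for mic input).
y = np.zeros(sr, dtype=np.float32)
y[sr // 2 : sr // 2 + 200] = 0.9

# Discrete interaction: detect sound events and trigger an animation on each.
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
for t in onsets:
    print(f"trigger 'clap' sprite at {t:.2f}s")   # e.g., swap to the clap pose

# Continuous interaction: map loudness (RMS) to an animation parameter.
rms = librosa.feature.rms(y=y)[0]
scale = 1.0 + 2.0 * rms / (rms.max() + 1e-8)      # sprite scale in [1, 3]
print(scale[:5])
```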
Citations: 1
Supporting Requesters in Writing Clear Crowdsourcing Task Descriptions Through Computational Flaw Assessment
Pub Date: 2023-03-27; DOI: 10.1145/3581641.3584039
Zahra Nouri, N. Prakash, U. Gadiraju, Henning Wachsmuth
Quality control is an, if not the, essential challenge in crowdsourcing. Unsatisfactory responses from crowd workers have been found to result particularly from ambiguous and incomplete task descriptions, often written by inexperienced task requesters. However, creating clear task descriptions with sufficient information is a complex process for requesters in crowdsourcing marketplaces. In this paper, we investigate the extent to which requesters can be supported effectively in this process through computational techniques. To this end, we developed a tool that enables requesters to iteratively identify and correct eight common clarity flaws in their task descriptions before deployment on the platform. The tool can be used to write task descriptions from scratch or to assess and improve the clarity of prepared descriptions. It employs machine learning-based natural language processing models, trained on real-world task descriptions, that score a given task description for the eight clarity flaws. On this basis, the requester can iteratively revise and reassess the task description until it reaches a sufficient level of clarity. In a first user study, we let requesters create task descriptions using the tool and rate the tool’s different aspects of helpfulness thereafter. We then carried out a second user study with crowd workers, as those who are confronted with such descriptions in practice, to rate the clarity of the created task descriptions. According to our results, 65% of the requesters classified the helpfulness of the information provided by the tool as high or very high (only 12% as low or very low). The requesters saw some room for improvement, though, for example concerning the display of bad examples. Nevertheless, 76% of the crowd workers believe that the overall clarity of the task descriptions created by the requesters using the tool improves over the initial version. In line with this, the automatically computed clarity scores of the edited task descriptions were generally higher than those of the initial descriptions, indicating that the tool reliably predicts the clarity of task descriptions in overall terms.
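A minimal sketch of the scoring idea: a multi-label text classifier that flags a task description for several clarity flaws at once, which a requester can rerun after each revision. The flaw names, toy corpus, and TF-IDF/logistic-regression pipeline are all assumptions; the actual tool's models are trained on real-world task descriptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical flaw taxonomy; the paper's actual eight flaw names may differ.
FLAWS = ["ambiguous", "incomplete", "inconsistent", "jargon",
         "no_examples", "vague_reward", "unclear_format", "too_long"]

# Tiny illustrative corpus; flaws never seen in training fall back to
# constant "absent" predictors inside OneVsRestClassifier.
docs = ["Label the sentiment of each tweet as positive or negative.",
        "Do the task well and submit.",
        "Transcribe the audio; format unspecified, pay TBD."]
labels = [[], ["ambiguous", "incomplete"], ["unclear_format", "vague_reward"]]

mlb = MultiLabelBinarizer(classes=FLAWS)
Y = mlb.fit_transform(labels)
model = make_pipeline(TfidfVectorizer(),
                      OneVsRestClassifier(LogisticRegression(max_iter=1000)))
model.fit(docs, Y)

# A requester iterates: score a draft, revise, rescore, until no flaws remain.
draft = "Answer the questions."
flags = mlb.inverse_transform(model.predict([draft]))
print(flags)  # flagged flaw names for the draft (toy model, illustrative only)
```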
Citations: 0
Evaluating Descriptive Quality of AI-Generated Audio Using Image-Schemas
Pub Date: 2023-03-27; DOI: 10.1145/3581641.3584083
Purnima Kamath, Zhuoyao Li, Chitralekha Gupta, Kokil Jaidka, Suranga Nanayakkara, L. Wyse
Novel AI-generated audio samples are evaluated for descriptive qualities, such as the smoothness of a morph, using crowdsourced human listening tests. However, the methods for designing interfaces for such experiments and for effectively articulating the descriptive audio quality under test receive very little attention in the evaluation-metrics literature. In this paper, we explore the use of visual metaphors based on image-schemas to design interfaces for evaluating AI-generated audio. Furthermore, we highlight the importance of framing and contextualizing the descriptive audio quality under measurement using such constructs. Using both pitched sounds and textures, we conduct two sets of experiments to investigate how the quality of responses varies with audio and task complexity. Our results show that, in both cases, using image-schemas improves the quality and consensus of AI-generated audio evaluations. Our findings reinforce the importance of interface design for listening tests and of stationary visual constructs for communicating temporal qualities of AI-generated audio samples, especially to naïve listeners on crowdsourced platforms.
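The abstract does not name a specific consensus statistic; one standard way to quantify inter-rater consensus in such listening tests is Fleiss' kappa, sketched below on made-up ratings of morph smoothness.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for inter-rater consensus.
    counts[i, j] = number of raters who put item i into category j."""
    n = counts.sum(axis=1)[0]                 # raters per item (assumed constant)
    P_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))  # per-item agreement
    P_bar = P_i.mean()                        # observed agreement
    p_j = counts.sum(axis=0) / counts.sum()   # marginal category proportions
    P_e = np.sum(p_j ** 2)                    # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# 5 audio morphs rated "smooth" / "abrupt" / "unsure" by 6 listeners each
# (invented numbers, purely for illustration).
ratings = np.array([[6, 0, 0],
                    [5, 1, 0],
                    [4, 1, 1],
                    [1, 5, 0],
                    [0, 6, 0]])
print(f"kappa = {fleiss_kappa(ratings):.2f}")  # higher = stronger consensus
```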
Citations: 1
Taming Entangled Accessibility Forum Threads for Efficient Screen Reading
Pub Date: 2023-03-27; DOI: 10.1145/3581641.3584073
Anand Ravi Aiyer, I. Ramakrishnan, V. Ashok
Accessibility forums enable individuals with visual impairments to connect and collaboratively seek solutions to technical issues, as well as share reviews, best practices, and the latest news. However, these forums are presently built on legacy systems that were primarily designed for sighted users and are difficult to navigate with non-visual assistive technologies like screen readers. Accessibility forum threads are “entangled”, with multiple sub-conversations interleaved with each other. This does not gel with the predominantly linear navigation of screen readers. Screen-reader users often listen to reams of irrelevant posts while foraging for nuggets of interest. To address this and improve non-visual interaction efficiency, we present TASER, a browser extension that leverages a state-of-the-art conversation disentanglement algorithm to automatically identify and separate sub-conversations in a forum thread, and then presents these sub-conversations to the user via a custom interface specifically tailored for efficient and usable screen-reader interaction. In a user study with 11 screen-reader users, we observed that TASER significantly reduced average user input actions and interaction times, along with a significant drop in cognitive load (lower NASA-TLX scores), compared to the status quo while performing representative information-foraging tasks on accessibility forums.
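As a toy illustration of disentanglement, the sketch below scores post pairs by token overlap and groups related posts with union-find; TASER's actual state-of-the-art disentanglement model is far more sophisticated, so both the affinity function and the threshold here are assumptions.

```python
from itertools import combinations

def same_thread_score(a: str, b: str) -> float:
    """Toy pairwise affinity via token overlap (Jaccard). The real system uses
    a state-of-the-art disentanglement model; this stands in for its scores."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def disentangle(posts: list[str], threshold: float = 0.2) -> list[list[str]]:
    """Group posts into sub-conversations via union-find over related pairs."""
    parent = list(range(len(posts)))

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for i, j in combinations(range(len(posts)), 2):
        if same_thread_score(posts[i], posts[j]) >= threshold:
            parent[find(i)] = find(j)       # merge the two groups

    groups: dict[int, list[str]] = {}
    for i, post in enumerate(posts):
        groups.setdefault(find(i), []).append(post)
    return list(groups.values())

posts = ["VoiceOver stopped reading tables in Safari 16",
         "Which braille display works with Android 13?",
         "Try toggling VoiceOver table navigation in Safari settings",
         "The Focus 40 braille display pairs fine with Android 13"]
for sub in disentangle(posts):
    print(sub)   # each list is presented as one coherent sub-conversation
```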
Citations: 0
An Investigation into an Always Listening Interface to Support Data Exploration
Pub Date: 2023-03-27; DOI: 10.1145/3581641.3584079
Roderick S. Tabalba, Nurit Kirshenbaum, J. Leigh, Abari Bhattacharya, Veronica Grosso, Barbara Di Eugenio, Andrew E. Johnson, Moira Zellner
Natural language interfaces that facilitate data exploration tasks are rapidly gaining interest in the research community because they enable users to focus their attention on the task of inquiry rather than the mechanics of chart construction. Yet, current systems rely solely on processing the user’s explicit commands to generate the user’s intended chart. These commands can be ambiguous due to natural-language tendencies such as speech disfluency and underspecification. In this paper, we developed and studied how an always-listening interface can help contextualize imprecise queries. Our study revealed that an always-listening interface is able to use an ongoing conversation to fill in missing properties for imprecise commands, disambiguate inaccurate commands without asking the user for clarification, and generate charts without being explicitly asked.
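A minimal sketch of the contextualization idea: keep listening to the conversation, accumulate mentioned data fields and chart types, and use them to fill in an underspecified command. The keyword matching and the dataset schema (rainfall, temperature, month) are assumptions for illustration, not the paper's actual pipeline.

```python
import re

CHART_KEYWORDS = {"bar": "bar", "line": "line", "scatter": "scatter"}
KNOWN_FIELDS = ["rainfall", "temperature", "month"]   # assumed dataset schema

def update_context(context: dict, utterance: str) -> dict:
    """Accumulate fields and chart types mentioned anywhere in the conversation."""
    for field in KNOWN_FIELDS:
        if re.search(rf"\b{field}\b", utterance, re.I):
            context.setdefault("fields", []).append(field)
    for word, chart in CHART_KEYWORDS.items():
        if re.search(rf"\b{word}\b", utterance, re.I):
            context["chart"] = chart
    return context

def resolve_command(command: str, context: dict) -> dict:
    """Fill properties missing from an underspecified command using context."""
    spec = update_context(dict(context), command)
    spec.setdefault("chart", context.get("chart", "bar"))   # fallback default
    spec["fields"] = spec.get("fields") or context.get("fields", [])
    return spec

context: dict = {}
for utt in ["We were looking at rainfall by month last week.",
            "Yeah, the line chart made the seasonal trend obvious."]:
    context = update_context(context, utt)

# "Show it again" names neither fields nor chart type; context fills in both.
print(resolve_command("show it again", context))
# {'fields': ['rainfall', 'month'], 'chart': 'line'}
```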
Citations: 3
AlphaDAPR: An AI-based Explainable Expert Support System for Art Therapy
Pub Date: 2023-03-27; DOI: 10.1145/3581641.3584087
Jiwon Kim, Jiwon Kang, Taeeun Kim, Hayeon Song, Jinyoung Han
Sketch-based drawing assessments in art therapy are widely used to understand individuals’ cognitive and psychological states, such as cognitive impairment or mental disorders. Along with questionnaire-based self-report measures, psychological drawing assessments can augment information about an individual’s psychological state. However, interpreting drawing assessments requires much time and effort, especially in large-scale groups such as schools or companies, and depends on the experience of the art therapist. To address this issue, we propose an AI-based expert support system, AlphaDAPR, to support art therapists and psychologists in conducting large-scale automatic drawing assessments. Our survey of 64 art therapists showed that 64.06% of participants indicated a willingness to use the proposed system. The results of structural equation modeling highlighted the importance of explainable AI embedded in the interface design for perceived usefulness, trust, satisfaction, and eventual intention to use. The interview results revealed that most art therapists express a strong intention to use the proposed system while also voicing concerns about AI’s possible limitations and threats. Discussion and implications are provided, stressing the importance of clear communication about the collaborative roles of AI and users.
Citations: 0
D-Touch: Recognizing and Predicting Fine-grained Hand-face Touching Activities Using a Neck-mounted Wearable
Pub Date: 2023-03-27; DOI: 10.1145/3581641.3584063
Hyunchul Lim, Ruidong Zhang, Samhita Pendyal, J. Jo, Cheng Zhang
This paper presents D-Touch, a neck-mounted wearable sensing system that can recognize and predict how a hand touches the face. It uses a neck-mounted infrared (IR) camera, which takes pictures of the head from the neck. These IR camera images are processed and used to train a deep-learning model to recognize and predict touch times and positions. The study showed that D-Touch distinguished 17 facial-related activities (FrA), including 11 face-touch positions and 6 other activities, with over 92.1% accuracy, and predicted hand-touching of the T-zone from other FrA with an accuracy of 82.12% within 150 ms after the hand appeared in the camera view. A study with 10 participants, conducted in their homes without any constraints, showed that D-Touch can predict hand-touching of the T-zone from other FrA with an accuracy of 72.3% within 150 ms after the camera saw the hand. Based on the study results, we further discuss the opportunities and challenges of deploying D-Touch in real-world scenarios.
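As a rough sketch of the recognition step, a small CNN classifying a single neck-view IR frame into the 17 activity classes might look as follows; the architecture, input resolution, and single-frame formulation are assumptions (the paper's model also predicts touch timing).

```python
import torch
import torch.nn as nn

class TouchClassifier(nn.Module):
    """Small CNN over neck-view IR frames; the 17 classes mirror the paper's
    11 face-touch positions + 6 other activities (architecture is an assumption)."""

    def __init__(self, n_classes=17):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # downsample 2x
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # downsample 4x
            nn.AdaptiveAvgPool2d(4),                                # -> (B, 32, 4, 4)
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                      # x: (B, 1, H, W) IR frames
        return self.classifier(self.features(x).flatten(1))

model = TouchClassifier()
frame = torch.randn(1, 1, 120, 160)            # one low-res IR frame (assumed size)
probs = torch.softmax(model(frame), dim=1)
print(probs.argmax(dim=1))                     # predicted activity class
```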
Citations: 0
Resilience Through Appropriation: Pilots’ View on Complex Decision Support
Pub Date: 2023-03-27; DOI: 10.1145/3581641.3584056
Z. Zhang, Cara Storath, Yuanting Liu, A. Butz
Intelligent decision support tools (DSTs) hold the promise of improving the quality of human decision-making in challenging situations like diversions in aviation. To achieve these improvements, a common goal in DST design is to calibrate decision makers’ trust in the system. However, this perspective is mostly informed by controlled studies and might not fully reflect the real-world complexity of diversions. To understand how DSTs can be beneficial from the perspective of those who best understand the complexity of diversions, we interviewed professional pilots. To facilitate discussions, we built two low-fidelity prototypes, each representing a different role a DST could assume: (a) actively suggesting and ranking airports based on pilot-specified criteria, and (b) unobtrusively hinting at data points the pilot should be aware of. We find that while pilots would not blindly trust a DST, they at the same time reject deliberate trust calibration at the moment of decision. We revisit appropriation as a lens for understanding this seeming contradiction, as well as a range of means to enable appropriation. Aside from the commonly considered need for transparency, these include directability and continuous support throughout the entire decision process. Based on our design exploration, we encourage expanding the view of DST design beyond trust calibration at the point of the actual decision.
Citations: 1