
IUI. International Conference on Intelligent User Interfaces: latest publications

Optimizing temporal topic segmentation for intelligent text visualization
Pub Date : 2013-03-19 DOI: 10.1145/2449396.2449441
Shimei Pan, Michelle X. Zhou, Yangqiu Song, Weihong Qian, Fei Wang, Shixia Liu
We are building a topic-based, interactive visual analytic tool that aids users in analyzing large collections of text. To help users quickly discover content evolution and significant content transitions within a topic over time, here we present a novel, constraint-based approach to temporal topic segmentation. Our solution splits a discovered topic into multiple linear, non-overlapping sub-topics along a timeline by satisfying a diverse set of semantic, temporal, and visualization constraints simultaneously. For each derived sub-topic, our solution also automatically selects a set of representative keywords to summarize the main content of the sub-topic. Our extensive evaluation, including a crowd-sourced user study, demonstrates the effectiveness of our method over an existing baseline.
Pages: 339-350
Citations: 16
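The constraint-based splitting this abstract describes can be pictured as a classic contiguous-segmentation problem. The sketch below is only an illustration of that idea, assuming per-time-bin keyword vectors, a squared-deviation coherence cost, and a minimum-length constraint standing in for a visualization constraint; the paper's actual constraint set and optimization are richer.

```python
# Illustrative only: contiguous timeline segmentation by dynamic programming.
# The cost function and min-length constraint are hypothetical stand-ins for
# the paper's semantic/temporal/visualization constraints.

def segment_cost(bins, i, j):
    """Sum of squared deviations from the segment mean (lower = more coherent)."""
    seg = bins[i:j]
    dims = len(seg[0])
    mean = [sum(v[d] for v in seg) / len(seg) for d in range(dims)]
    return sum((v[d] - mean[d]) ** 2 for v in seg for d in range(dims))

def segment_timeline(bins, n_segments, min_len=1):
    """Split `bins` into `n_segments` contiguous spans minimizing total cost,
    subject to a minimum segment length."""
    n = len(bins)
    INF = float("inf")
    # best[s][j] = minimal cost of covering bins[:j] with s segments
    best = [[INF] * (n + 1) for _ in range(n_segments + 1)]
    back = [[0] * (n + 1) for _ in range(n_segments + 1)]
    best[0][0] = 0.0
    for s in range(1, n_segments + 1):
        for j in range(s * min_len, n + 1):
            for i in range((s - 1) * min_len, j - min_len + 1):
                if best[s - 1][i] == INF:
                    continue
                c = best[s - 1][i] + segment_cost(bins, i, j)
                if c < best[s][j]:
                    best[s][j], back[s][j] = c, i
    # Walk back pointers to recover segment boundaries
    cuts, j = [], n
    for s in range(n_segments, 0, -1):
        i = back[s][j]
        cuts.append((i, j))
        j = i
    return list(reversed(cuts))
```

With `n_segments=2` and a timeline whose halves use disjoint keywords, the recovered boundary falls at the midpoint.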
Directing exploratory search: reinforcement learning from user interactions with keywords
Pub Date : 2013-03-19 DOI: 10.1145/2449396.2449413
D. Glowacka, Tuukka Ruotsalo, Ksenia Konyushkova, Kumaripaba Athukorala, Samuel Kaski, Giulio Jacucci
Techniques for both exploratory and known item search tend to direct only to more specific subtopics or individual documents, as opposed to allowing directing the exploration of the information space. We present an interactive information retrieval system that combines Reinforcement Learning techniques along with a novel user interface design to allow active engagement of users in directing the search. Users can directly manipulate document features (keywords) to indicate their interests and Reinforcement Learning is used to model the user by allowing the system to trade off between exploration and exploitation. This gives users the opportunity to more effectively direct their search nearer, further and following a direction. A task-based user study conducted with 20 participants comparing our system to a traditional query-based baseline indicates that our system significantly improves the effectiveness of information retrieval by providing access to more relevant and novel information without having to spend more time acquiring the information.
Pages: 117-128
Citations: 116
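The exploration/exploitation trade-off this abstract relies on is often introduced with a multi-armed bandit. The epsilon-greedy sketch below treats each candidate keyword as an arm and user feedback as reward; it is a hypothetical stand-in, not the system's actual reinforcement-learning model.

```python
import random

# Minimal epsilon-greedy bandit over candidate keywords -- one common way to
# trade off exploring new keywords against exploiting known-good ones.

class KeywordBandit:
    def __init__(self, keywords, epsilon=0.1, seed=None):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        self.counts = {k: 0 for k in keywords}
        self.values = {k: 0.0 for k in keywords}  # running mean reward

    def select(self):
        """Mostly exploit the best-looking keyword; sometimes explore."""
        if self.rng.random() < self.epsilon:
            return self.rng.choice(list(self.counts))
        return max(self.values, key=self.values.get)

    def update(self, keyword, reward):
        """Fold user feedback (e.g. a keyword manipulation) into the estimate."""
        self.counts[keyword] += 1
        n = self.counts[keyword]
        self.values[keyword] += (reward - self.values[keyword]) / n
```

With `epsilon=0` the policy is purely greedy, which makes its behavior deterministic for testing; in use, a nonzero epsilon keeps the system probing keywords the user has not yet reacted to.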
SmartDCap: semi-automatic capture of higher quality document images from a smartphone
Pub Date : 2013-03-19 DOI: 10.1145/2449396.2449433
Francine Chen, S. Carter, Laurent Denoue, J. Kumar
People frequently capture photos with their smartphones, and some are starting to capture images of documents. However, the quality of captured document images is often lower than expected, even when an application that performs post-processing to improve the image is used. To improve the quality of captured images before post-processing, we developed the Smart Document Capture (SmartDCap) application that provides real-time feedback to users about the likely quality of a captured image. The quality measures capture the sharpness and framing of a page or regions on a page, such as a set of one or more columns, a part of a column, a figure, or a table. Using our approach, while users adjust the camera position, the application automatically determines when to take a picture of a document to produce a good quality result. We performed a subjective evaluation comparing SmartDCap and the Android Ice Cream Sandwich (ICS) camera application; we also used raters to evaluate the quality of the captured images. Our results indicate that users find SmartDCap to be as easy to use as the standard ICS camera application. Also, images captured using SmartDCap are sharper and better framed on average than images using the ICS camera application.
Pages: 287-296
Citations: 17
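A capture-time quality check of the kind SmartDCap performs needs a cheap sharpness signal. The function below uses mean squared gradient energy, a common heuristic; the paper's actual sharpness and framing measures are not specified here, so this is illustrative only.

```python
# Common sharpness heuristic: mean squared difference between adjacent
# pixels. Blurry images have soft edges and score low; crisp text edges
# score high. Input is a grayscale image as a list of pixel rows.

def sharpness(gray):
    h, w = len(gray), len(gray[0])
    total, n = 0.0, 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:  # horizontal gradient
                total += (gray[y][x + 1] - gray[y][x]) ** 2
                n += 1
            if y + 1 < h:  # vertical gradient
                total += (gray[y + 1][x] - gray[y][x]) ** 2
                n += 1
    return total / n if n else 0.0
```

A real-time feedback loop would compare this score against a threshold (or a running maximum) while the user adjusts the camera, and trigger capture when it peaks.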
LinkedVis: exploring social and semantic career recommendations
Pub Date : 2013-03-19 DOI: 10.1145/2449396.2449412
Svetlin Bostandjiev, J. O'Donovan, Tobias Höllerer
This paper presents LinkedVis, an interactive visual recommender system that combines social and semantic knowledge to produce career recommendations based on the LinkedIn API. A collaborative (social) approach is employed to identify professionals with similar career paths and produce personalized recommendations of both companies and roles. To unify semantically identical but lexically distinct entities and arrive at better user models, we employ lightweight natural language processing and entity resolution using semantic information from a variety of end-points on the web. Elements from the underlying recommendation algorithm are exposed through an interactive interface that allows users to manipulate different aspects of the algorithm and the data it operates on, allowing users to explore a variety of "what-if" scenarios around their current profile. We evaluate LinkedVis through leave-one-out accuracy and diversity experiments on a data corpus collected from 47 users and their LinkedIn connections, as well as through a supervised study of 27 users exploring their own profile and recommendations interactively. Results show that our approach outperforms a benchmark recommendation algorithm without semantic resolution in terms of accuracy and diversity, and that the ability to tweak recommendations interactively by adjusting profile item and social connection weights further improves predictive accuracy. Questionnaires on the user experience with the explanatory and interactive aspects of the application reveal very high user acceptance and satisfaction.
Pages: 107-116
Citations: 38
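The interactive weight-tweaking this abstract describes can be sketched as a weighted-overlap scorer whose weights the user edits; adjusting a profile-item or connection weight re-ranks the candidates. The field names and scoring scheme here are hypothetical, not LinkedVis's actual algorithm.

```python
# Sketch: score each candidate company/role by user-adjustable weights on
# profile items and social connections, then rank. All names hypothetical.

def score(candidate, item_weights, connection_weights):
    """Weighted overlap between a candidate and the user model."""
    s = sum(item_weights.get(item, 0.0) for item in candidate["items"])
    s += sum(connection_weights.get(c, 0.0) for c in candidate["connections"])
    return s

def recommend(candidates, item_weights, connection_weights, top_k=3):
    ranked = sorted(
        candidates,
        key=lambda c: score(c, item_weights, connection_weights),
        reverse=True,
    )
    return [c["name"] for c in ranked[:top_k]]
```

Exposing `item_weights` and `connection_weights` as interface sliders is what enables the "what-if" exploration the abstract mentions: the user changes a weight and immediately sees the ranking shift.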
User-adaptive information visualization: using eye gaze data to infer visualization tasks and user cognitive abilities
Pub Date : 2013-03-19 DOI: 10.1145/2449396.2449439
B. Steichen, G. Carenini, C. Conati
Information Visualization systems have traditionally followed a one-size-fits-all model, typically ignoring an individual user's needs, abilities and preferences. However, recent research has indicated that visualization performance could be improved by adapting aspects of the visualization to each individual user. To this end, this paper presents research aimed at supporting the design of novel user-adaptive visualization systems. In particular, we discuss results on using information on user eye gaze patterns while interacting with a given visualization to predict the user's visualization tasks, as well as user cognitive abilities including perceptual speed, visual working memory, and verbal working memory. We show that such predictions are significantly better than a baseline classifier even during the early stages of visualization usage. These findings are discussed in view of designing visualization systems that can adapt to each individual user in real-time.
Pages: 317-328
Citations: 150
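As an illustration of classifying gaze-feature vectors (e.g. fixation rate, mean fixation duration) into visualization tasks, here is a toy nearest-centroid classifier; both the features and the model are hypothetical stand-ins for the classifiers used in the study.

```python
# Toy nearest-centroid classifier: learn one mean feature vector per task
# label, then assign new gaze summaries to the closest centroid.

def centroids(X, y):
    """Mean feature vector per label, from training pairs (X[i], y[i])."""
    by_label = {}
    for features, label in zip(X, y):
        by_label.setdefault(label, []).append(features)
    return {
        label: [sum(col) / len(col) for col in zip(*rows)]
        for label, rows in by_label.items()
    }

def predict(model, features):
    """Label of the centroid with smallest squared distance to `features`."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, model[label]))
    return min(model, key=dist)
```

A baseline of the kind the abstract compares against would simply predict the majority task label regardless of the gaze input.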
Automatic and continuous user task analysis via eye activity
Pub Date : 2013-03-19 DOI: 10.1145/2449396.2449406
Siyuan Chen, J. Epps, Fang Chen
A day in the life of a user can be segmented into a series of tasks: a user begins a task, becomes loaded perceptually and cognitively to some extent by the objects and mental challenge that comprise that task, then at some point switches or is distracted to a new task, and so on. Understanding the contextual task characteristics and user behavior in interaction can benefit the development of intelligent systems to aid user task management. Applications that aid the user in one way or another have proliferated as computing devices become more and more of a constant companion. However, direct and continuous observations of individual tasks in a naturalistic context and subsequent task analysis, for example the diary method, have traditionally been a manual process. We propose a method for automatic task analysis system, which monitors the user's current task and analyzes it in terms of the task transition, and perceptual and cognitive load imposed by the task. An experiment was conducted in which participants were required to work continuously on groups of three sequential tasks of different types. Three classes of eye activity, namely pupillary response, blink and eye movement, were analyzed to detect the task transition and non-transition states, and to estimate three levels of perceptual load and three levels of cognitive load every second to infer task characteristics. This paper reports statistically significant classification accuracies in all cases and demonstrates the feasibility of this approach for task monitoring and analysis.
Pages: 57-66
Citations: 31
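The per-second estimation pipeline implied by this abstract starts by windowing the raw eye-activity stream. A minimal sketch, assuming samples of the form (timestamp in seconds, pupil diameter, blink flag); the summary features and any downstream load classifier are hypothetical.

```python
# Group an eye-activity sample stream into 1-second windows and summarize
# each window -- the kind of per-second features the abstract's load
# estimators would consume. Feature names are hypothetical.

def per_second_features(samples):
    windows = {}
    for t, pupil, is_blink in samples:
        windows.setdefault(int(t), []).append((pupil, is_blink))
    feats = []
    for sec in sorted(windows):
        w = windows[sec]
        feats.append({
            "second": sec,
            "mean_pupil": sum(p for p, _ in w) / len(w),
            "blinks": sum(1 for _, b in w if b),
        })
    return feats
```

Each one-second feature dict would then be fed to separate classifiers for task transition, perceptual load, and cognitive load, as the abstract describes.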
Helping users with information disclosure decisions: potential for adaptation
Pub Date : 2013-03-19 DOI: 10.1145/2449396.2449448
Bart P. Knijnenburg, A. Kobsa
Personalization relies on personal data about each individual user. Users are quite often reluctant though to disclose information about themselves and to be "tracked" by a system. We investigated whether different types of rationales (justifications) for disclosure that have been suggested in the privacy literature would increase users' willingness to divulge demographic and contextual information about themselves, and would raise their satisfaction with the system. We also looked at the effect of the order of requests, owing to findings from the literature. Our experiment with a mockup of a mobile app recommender shows that there is no single strategy that is optimal for everyone. Heuristics can be defined though that select for each user the most effective justification to raise disclosure or satisfaction, taking the user's gender, disclosure tendency, and the type of solicited personal information into account. We discuss the implications of these findings for research aimed at personalizing privacy strategies to each individual user.
Pages: 407-416
Citations: 39
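The closing idea, heuristics that pick the most effective justification per user, could be realized as a simple rule table over the three factors the abstract names (gender, disclosure tendency, type of solicited information). The specific rules below are invented purely for illustration; the paper derives its heuristics empirically from study data.

```python
# Hypothetical rule table selecting a disclosure justification per user.
# The rule contents are made up; only the input factors come from the
# abstract (gender, disclosure tendency, solicited information type).

def pick_justification(gender, disclosure_tendency, info_type):
    if info_type == "demographic" and disclosure_tendency == "low":
        return "usefulness-to-you"
    if info_type == "contextual" and gender == "female":
        return "usefulness-to-others"
    return "none"  # default: request without a justification
```

The point of the sketch is structural: the system branches on user traits and request type rather than applying one justification strategy to everyone.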
Team reactions to voiced agent instructions in a pervasive game
Pub Date : 2013-03-19 DOI: 10.1145/2449396.2449445
Stuart Moran, Nadia Pantidi, K. Bachour, J. Fischer, Martin Flintham, T. Rodden, Simon Evans, Simon Johnson
The assumed role of humans as controllers and instructors of machines is changing. As systems become more complex and incomprehensible to humans, it will be increasingly necessary for us to place confidence in intelligent interfaces and follow their instructions and recommendations. This type of relationship becomes particularly intricate when we consider significant numbers of humans and agents working together in collectives. While instruction-based interfaces and agents already exist, our understanding of them within the field of Human-Computer Interaction is still limited. As such, we developed a large-scale pervasive game called 'Cargo', where a semi-autonomous ruled-based agent distributes a number of text-to-speech instructions to multiple teams of players via their mobile phone as an interface. We describe how people received, negotiated and acted upon the instructions in the game both individually and as a team and how players initial plans and expectations shaped their understanding of the instructions.
Pages: 371-382
Citations: 24
Mind the gap: collecting commonsense data about simple experiences
Pub Date : 2013-03-19 DOI: 10.1145/2449396.2449421
J. Weltman, S. S. Iyengar, Michael Hegarty
In natural language, there are many gaps between what is stated and what is understood. Speakers and listeners fill in these gaps, presumably from some life experience, but no one knows how to get this experiential data into a computer. As a first step, we have created a methodology and software interface for collecting commonsense data about simple experiences. This work is intended to form the basis of a new resource for natural language processing. We model experience as a sequence of comic frames, annotated with the changing intentional and physical states of the characters and objects. To create an annotated experience, our software interface guides non-experts in identifying facts about experiences that humans normally take for granted. As part of this process, the system asks questions using the Socratic Method to help users notice difficult-to-articulate commonsense data. A test on ten subjects indicates that non-experts are able to produce high quality experiential data.
Pages: 179-190
Citations: 2
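The abstract above models an experience as a sequence of comic frames annotated with the changing intentional and physical states of characters and objects, with Socratic questions surfacing the facts annotators take for granted. A minimal sketch of such a representation — all class names, fields, and question templates here are hypothetical illustrations, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class EntityState:
    """Intentional and physical state of one character or object in a frame."""
    name: str
    physical: dict = field(default_factory=dict)    # e.g. {"location": "kitchen"}
    intentions: list = field(default_factory=list)  # e.g. ["make coffee"]

@dataclass
class Frame:
    """One comic frame: a caption plus the states of the entities in it."""
    caption: str
    states: list  # list of EntityState

@dataclass
class Experience:
    """An annotated experience: an ordered sequence of frames."""
    title: str
    frames: list  # list of Frame

    def socratic_prompts(self):
        """Yield simple questions that nudge annotators to fill commonsense gaps."""
        for i, frame in enumerate(self.frames):
            for s in frame.states:
                if not s.intentions:
                    yield f"In frame {i + 1}, what does {s.name} want to do?"
                if not s.physical:
                    yield f"In frame {i + 1}, where is {s.name}, and in what state?"

exp = Experience("Making coffee", [
    Frame("Ann enters the kitchen",
          [EntityState("Ann", physical={"location": "kitchen"})]),
])
print(list(exp.socratic_prompts()))
```

The key design point the abstract implies is that unfilled annotation slots drive the questioning: whatever the annotator has not yet stated becomes the next prompt.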
Recommendation system for automatic design of magazine covers
Pub Date : 2013-03-19 DOI: 10.1145/2449396.2449411
Ali Jahanian, Jerry Liu, Qian Lin, D. Tretter, Eamonn O'Brien-Strain, S. Lee, Nic Lyons, J. Allebach
In this paper, we present a recommendation system for the automatic design of magazine covers. Our users are non-designer designers: individuals or small and medium businesses who want to design without hiring a professional designer while still wanting to create aesthetically compelling designs. Because a design should have a purpose, we suggest a number of semantic features to the user, e.g., "clean and clear," "dynamic and active," or "formal," to describe the color mood for the purpose of his/her design. Based on these high level features and a number of low level features, such as the complexity of the visual balance in a photo, our system selects the best photos from the user's album for his/her design. Our system then generates several alternative designs that can be rated by the user. Consequently, our system generates future designs based on the user's style. In this fashion, our system personalizes the designs of a user based on his/her preferences.
{"title":"Recommendation system for automatic design of magazine covers","authors":"Ali Jahanian, Jerry Liu, Qian Lin, D. Tretter, Eamonn O'Brien-Strain, S. Lee, Nic Lyons, J. Allebach","doi":"10.1145/2449396.2449411","DOIUrl":"https://doi.org/10.1145/2449396.2449411","url":null,"abstract":"In this paper, we present a recommendation system for the automatic design of magazine covers. Our users are non-designer designers: individuals or small and medium businesses who want to design without hiring a professional designer while still wanting to create aesthetically compelling designs. Because a design should have a purpose, we suggest a number of semantic features to the user, e.g., \"clean and clear,\" \"dynamic and active,\" or \"formal,\" to describe the color mood for the purpose of his/her design. Based on these high level features and a number of low level features, such as the complexity of the visual balance in a photo, our system selects the best photos from the user's album for his/her design. Our system then generates several alternative designs that can be rated by the user. Consequently, our system generates future designs based on the user's style. In this fashion, our system personalizes the designs of a user based on his/her preferences.","PeriodicalId":87287,"journal":{"name":"IUI. International Conference on Intelligent User Interfaces","volume":"9 1","pages":"95-106"},"PeriodicalIF":0.0,"publicationDate":"2013-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84250183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 63
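The pipeline this abstract describes — map a chosen color-mood feature to photo scores, rank the user's album, then adapt future designs to the user's ratings — can be illustrated with a toy model. All photo data, feature axes, and function names below are invented for illustration; the paper's actual features and learning method are more sophisticated:

```python
# Toy feature vectors: how strongly each photo expresses each color mood,
# as (clean_and_clear, dynamic_and_active, formal) scores in [0, 1].
PHOTOS = {
    "beach.jpg":  (0.9, 0.3, 0.1),
    "city.jpg":   (0.4, 0.9, 0.2),
    "office.jpg": (0.5, 0.2, 0.9),
}
MOOD_AXIS = {"clean and clear": 0, "dynamic and active": 1, "formal": 2}

def pick_photos(mood, k=2):
    """Rank the album's photos by how strongly they express the requested mood."""
    axis = MOOD_AXIS[mood]
    return sorted(PHOTOS, key=lambda p: PHOTOS[p][axis], reverse=True)[:k]

def update_style(style, design_features, rating, lr=0.1):
    """Nudge a stored user-style vector toward the features of a highly rated design."""
    return tuple(s + lr * rating * (f - s) for s, f in zip(style, design_features))

print(pick_photos("formal"))  # best "formal" candidates first
```

The ratings loop is what personalizes the system: each rated design pulls the stored style vector toward (or, for low ratings, only weakly toward) that design's features, so later generations reflect accumulated preference.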
Journal
IUI. International Conference on Intelligent User Interfaces
Book学术
Literature exchange · Smart journal selection · Latest publications · Exchange guidelines · Contact us: info@booksci.cn
Book学术 provides a free academic resource search service for retrieving Chinese and English literature, committed to the most convenient, high-quality service experience.
Copyright © 2023 Book学术 All rights reserved.
京公网安备 11010802042870号 京ICP备2023020795号-1