
IUI. International Conference on Intelligent User Interfaces: Latest Publications

Clustering web pages to facilitate revisitation on mobile devices
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2167010
Jie Liu, Chun Yu, Wenchang Xu, Yuanchun Shi
Due to small screens, inaccurate input, and other limitations of mobile devices, revisiting Web pages in mobile browsers takes more time than in desktop browsers. In this paper, we propose a novel approach to facilitate revisitation. We designed AutoWeb, a system that clusters opened Web pages into topics based on their contents. Users can quickly find a desired opened Web page by narrowing the search scope to the group of Web pages that share its topic. Clustering accuracy was evaluated at 92.4%, and computing resource consumption was shown to be acceptable. A user study was conducted to explore the user experience and how much AutoWeb facilitates revisitation. Results showed that AutoWeb saves significant time during revisitation, and participants rated the system highly.
Cited: 10
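The content-based topic grouping that AutoWeb performs can be illustrated with a minimal sketch. This is not the authors' algorithm: it assumes a simple term-frequency representation, cosine similarity against the first page of each topic, and a hypothetical similarity threshold of 0.3.

```python
# Minimal sketch of content-based page grouping (illustrative only, not
# AutoWeb's actual clustering): represent each page's text as a
# term-frequency vector and greedily assign a page to an existing topic
# when its cosine similarity to the topic's seed page exceeds a threshold.
import math
from collections import Counter

def vectorize(text):
    """Term-frequency vector over lowercase word tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_pages(pages, threshold=0.3):
    """Assign each (title, text) page to the first sufficiently similar topic."""
    topics = []  # list of (seed_vector, [member titles])
    for title, text in pages:
        vec = vectorize(text)
        for seed, members in topics:
            if cosine(seed, vec) >= threshold:
                members.append(title)
                break
        else:  # no topic was similar enough: start a new one
            topics.append((vec, [title]))
    return [members for _, members in topics]

pages = [
    ("news1", "election results president vote parliament"),
    ("news2", "president election campaign vote"),
    ("sports", "football match goal league season"),
]
print(group_pages(pages))  # → [['news1', 'news2'], ['sports']]
```

A real system would weight terms (e.g. TF-IDF) and recompute topic centroids, but the greedy seed comparison already captures the "narrow the search to one topic" idea.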
Collecting multimodal data in the wild
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2167042
Michael Johnston, Patrick Ehlen
Multimodal interaction allows users to specify commands using combinations of inputs from multiple modalities. For example, in a local search application, a user might say "gas stations" while simultaneously tracing a route on a touchscreen display. In this demonstration, we describe the extension of our cloud-based speech recognition architecture to a Multimodal Semantic Interpretation System (MSIS) that supports processing of multimodal inputs streamed over HTTP. We illustrate the capabilities of the framework using Speak4it, a deployed mobile local search application supporting combined speech and gesture input. We provide interactive demonstrations of Speak4it on the iPhone and iPad and explain the challenges of supporting true multimodal interaction in a deployed mobile service.
Cited: 1
Machine listening: acoustic interface with ART
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2167021
Benjamin D. Smith, Guy E. Garnett
Recent developments in machine listening present opportunities for innovative paradigms of computer-human interaction. Voice recognition systems exemplify a typical approach that conforms to event-oriented control models. However, acoustic sound is continuous and high-dimensional, presenting a rich medium for computer interaction. Unsupervised machine learning models hold great potential for real-time machine listening and understanding of audio data. We propose a method for harnessing unsupervised machine learning algorithms, specifically Adaptive Resonance Theory, to inform machine listening, build musical context information, and drive real-time interactive performance systems. We present the design and evaluation of this model, leveraging the expertise of trained, improvising musicians.
Cited: 4
Virtual stage linked with a physical miniature stage to support multiple users in planning theatrical productions
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2166989
Yosuke Horiuchi, T. Inoue, Ken-ichi Okada
Theater is a collaborative art form that involves production team members with different specialties. Because theater involves various technical elements, such as stage design and lighting, the departments of a production team must cooperate to design a theatrical production. When planning a theatrical production, it is difficult to visualize the stage as a whole and to incorporate the ideas of production team members from the various departments. In this paper, we propose a system for reproducing the theatrical stage by means of a virtual stage linked to a physical miniature stage. The miniature stage is presented on a tabletop interface, and the virtual stage is created with computer graphics to reflect actions on the miniature stage in real time. By presenting theatrical production ideas in the two spaces, users can collaborate more easily and gain a comprehensive view of the stage.
Cited: 15
In-vehicle driver recognition based on hand ECG signals
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2166971
H. Silva, A. Lourenço, A. Fred
We present a system for in-vehicle driver recognition based on biometric information extracted from electrocardiographic (ECG) signals collected at the hands. We rely on non-intrusive techniques that are easy to integrate into components with which the driver naturally interacts, such as the steering wheel. This system is applicable to the automatic customization of vehicle settings according to the recognized driver, and can also expand the security features of the vehicle by detecting hands-off steering wheel events in a continuous or near-continuous manner. We performed randomized tests to evaluate the performance of the system in a subject identification scenario, using closed sets of up to 5 subjects, with promising results for the intended application.
Cited: 38
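Closed-set identification of this kind can be pictured with a toy classifier. This is an illustrative nearest-template sketch, not the authors' method; the feature names and values are invented for the example.

```python
# Sketch of closed-set subject identification from ECG-derived feature
# vectors (illustrative nearest-template classifier, not the paper's
# method): each enrolled driver has a template vector of heartbeat
# features; a new recording is attributed to the closest template.
import math

def identify(features, templates):
    """features: list of floats; templates: dict driver -> list of floats."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda driver: dist(features, templates[driver]))

templates = {
    "alice": [0.82, 0.31, 0.11],  # hypothetical normalized heartbeat features
    "bob":   [0.64, 0.45, 0.20],
}
print(identify([0.80, 0.33, 0.12], templates))  # → alice
```

Because the set of drivers is closed, the classifier always answers with an enrolled identity; a deployed system would also need a rejection threshold for unknown drivers.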
Continuous recognition of one-handed and two-handed gestures using 3D full-body motion tracking sensors
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2166983
P. Kristensson, Thomas Nicholson, A. Quigley
In this paper we present a new bimanual, markerless gesture interface for 3D full-body motion tracking sensors such as the Kinect. Our interface uses a probabilistic algorithm to incrementally predict users' intended one-handed and two-handed gestures while they are still being articulated. It supports scale- and translation-invariant recognition of arbitrarily defined gesture templates in real time. The interface supports two ways of gesturing commands in thin air to displays at a distance. First, users can use one-handed and two-handed gestures to directly issue commands. Second, users can use their non-dominant hand to modulate single-hand gestures. Our evaluation shows that the system recognizes one-handed and two-handed gestures with an accuracy of 92.7%–96.2%.
Cited: 65
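Scale- and translation-invariant template recognition, as described in the abstract, can be sketched in a few lines. This is the general normalization idea only, not the paper's probabilistic incremental recognizer, and it assumes trajectories with equal numbers of sample points.

```python
# Sketch of scale- and translation-invariant matching of 2D gesture
# trajectories (illustrative, not the paper's algorithm): subtract the
# centroid, scale to unit radius, then compare point-wise to templates.
import math

def normalize(points):
    """Translate the centroid to the origin and scale so the max radius is 1."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    centered = [(x - cx, y - cy) for x, y in points]
    scale = max(math.hypot(x, y) for x, y in centered) or 1.0
    return [(x / scale, y / scale) for x, y in centered]

def distance(a, b):
    """Mean point-wise distance between two normalized trajectories."""
    return sum(math.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(a, b)) / len(a)

def classify(trajectory, templates):
    """Return the name of the closest gesture template."""
    norm = normalize(trajectory)
    return min(templates,
               key=lambda name: distance(norm, normalize(templates[name])))

templates = {
    "swipe_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
    "swipe_up":    [(0, 0), (0, 1), (0, 2), (0, 3)],
}
# A larger, offset rightward swipe still matches "swipe_right".
print(classify([(10, 5), (20, 5), (30, 5), (40, 5)], templates))  # → swipe_right
```

A full recognizer would also resample trajectories to a fixed length and score partial gestures incrementally, which is what enables prediction "while they are still being articulated."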
On slide-based contextual cues for presentation reuse
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2166992
Moushumi Sharmin, L. Bergman, Jie Lu, Ravi B. Konuru
Reuse of existing presentation materials is prevalent among knowledge workers. However, finding the most appropriate material for reuse is challenging. Existing information management and search tools provide inadequate support for reuse due to their dependence on users' ability to effectively categorize, recall, and recognize existing materials. Based on our findings from an online survey and contextual interviews, we designed and implemented a slide-based contextual recommender, ConReP, for supporting reuse of presentation materials. ConReP uses a user-selected slide as a search key, recommends materials based on similarity to the selected slide, and provides a local-context-based visual representation of the recommendations. User feedback provides new insight into presentation reuse, revealing that slide-based search is more effective than keyword-based search and that the local-context-based visual representation helps with recall and recognition, and it shows the promise of this general approach of exploiting individual slides and local context for better presentation reuse.
Cited: 18
1st international workshop on user modeling from social media
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2167058
J. Mahmud, Jeffrey Nichols, Michelle X. Zhou
Massive amounts of data are being generated on social media sites, such as Twitter and Facebook. People from all walks of life share data about social events, express opinions, discuss their interests, publicize businesses, recommend products, and, explicitly or implicitly, reveal personal information. This workshop will focus on the use of social media data for creating models of individual users from the content that they publish. Deeper understanding of user behavior and associated attributes can benefit a wide range of intelligent applications, such as social recommender systems and expert finders, as well as provide the foundation in support of novel user interfaces (e.g., actively engaging the crowd in mixed-initiative question-answering systems). These applications and interfaces may offer significant benefits to users across a wide variety of domains, such as retail, government, healthcare and education. User modeling from public social media data may also reveal information that users would prefer to keep private. Such concerns are particularly important because individuals do not have complete control over the information they share about themselves. For example, friends of a user may inadvertently divulge private information about that user in their own posts. In this workshop we will also discuss possible mechanisms that users might employ to monitor what information has been revealed about themselves on social media and obfuscate any sensitive information that has been accidentally revealed.
Cited: 0
Simple, fast, and accurate clustering of data sequences
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2167027
Luis A. Leiva, E. Vidal
Many devices generate large amounts of data that follow some sort of sequentiality, e.g., motion sensors, e-pens, or eye trackers, and therefore these data often need to be compressed for classification, storage, and/or retrieval purposes. This paper introduces a simple, accurate, and extremely fast technique inspired by the well-known K-means algorithm to properly cluster sequential data. We illustrate the feasibility of our algorithm on a web-based prototype that works with trajectories derived from mouse and touch input. Our proposal outperforms the classical K-means algorithm in terms of accuracy (better, well-formed segmentations) and performance (less computation time).
Cited: 1
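What distinguishes clustering a *sequence* from ordinary clustering is that cluster members should stay contiguous in time. A minimal stand-in illustration (not the paper's K-means-inspired technique): walk the sequence and open a new segment whenever a point strays too far from the running centroid of the current one. The threshold value here is an assumption.

```python
# Sketch of order-preserving clustering of a 1D data sequence
# (illustrative only, not the paper's algorithm): points are grouped
# into contiguous segments; a new segment starts whenever a point lies
# farther than `threshold` from the current segment's centroid.
def segment_sequence(points, threshold):
    segments = [[points[0]]]
    for p in points[1:]:
        seg = segments[-1]
        centroid = sum(seg) / len(seg)
        if abs(p - centroid) <= threshold:
            seg.append(p)        # point continues the current segment
        else:
            segments.append([p])  # jump detected: open a new segment
    return segments

seq = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 1.05]
print(segment_sequence(seq, threshold=1.0))
# → [[1.0, 1.1, 0.9], [5.0, 5.2, 4.8], [1.05]]
```

Note how the final 1.05 forms its own segment even though it is close in value to the first one: sequentiality forbids merging non-adjacent points, which is exactly what a plain K-means would do.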
Probabilistic pointing target prediction via inverse optimal control
Pub Date : 2012-02-14 DOI: 10.1145/2166966.2166968
Brian D. Ziebart, Anind Dey, J. Andrew Bagnell
Numerous interaction techniques have been developed that make "virtual" pointing at targets in graphical user interfaces easier than analogous physical pointing tasks by invoking target-based interface modifications. These pointing facilitation techniques crucially depend on methods for estimating the relevance of potential targets. Unfortunately, many of the simple methods employed to date are inaccurate in common settings with many selectable targets in close proximity. In this paper, we bring recent advances in statistical machine learning to bear on this underlying target relevance estimation problem. By framing past target-driven pointing trajectories as approximate solutions to well-studied control problems, we learn the probabilistic dynamics of pointing trajectories that enable more accurate predictions of intended targets.
Cited: 99
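The underlying target relevance estimation problem can be made concrete with a simple heuristic sketch. This is not the paper's inverse-optimal-control model; it just scores each candidate target by how consistently the observed cursor steps head toward it and normalizes the scores into a distribution.

```python
# Sketch of probabilistic target prediction from a partial cursor path
# (a heuristic illustration, not the paper's learned trajectory model):
# each movement step contributes the cosine of the angle between the
# step direction and the direction to the target, and the summed scores
# are exponentiated and normalized into a posterior over targets.
import math

def target_posterior(path, targets):
    """path: list of (x, y) cursor samples; targets: dict name -> (x, y)."""
    scores = {}
    for name, (tx, ty) in targets.items():
        loglik = 0.0
        for (x0, y0), (x1, y1) in zip(path, path[1:]):
            step = math.atan2(y1 - y0, x1 - x0)
            to_target = math.atan2(ty - y0, tx - x0)
            # reward steps aligned with the direction to this target
            loglik += math.cos(step - to_target)
        scores[name] = math.exp(loglik)
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

targets = {"OK": (10, 0), "Cancel": (0, 10)}
posterior = target_posterior([(0, 0), (2, 0.2), (4, 0.3)], targets)
print(max(posterior, key=posterior.get))  # → OK
```

Such a distribution is what pointing facilitation techniques consume: an interface can expand or highlight the most probable target before the movement finishes.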