
Proceedings of the 10th international conference on Intelligent user interfaces: Latest publications

Extraction and classification of facemarks
Pub Date : 2005-01-10 DOI: 10.1145/1040830.1040847
Yuki Tanaka, Hiroya Takamura, M. Okumura
We propose methods for extracting facemarks (emoticons) from text and classifying them into emotional categories. Facemarks have gained popularity in text-based communication, since they help readers understand what writers imply. However, text-based communication with facemarks faces two problems: the sheer variety of facemarks, and the difficulty of interpreting them correctly. These problems are more serious where 2-byte characters are used, because 2-byte character sets can generate a very large number of distinct facemarks. We therefore propose methods for both extracting and classifying facemarks. Treating facemark extraction as a chunking task, we automatically annotate each character in the text with a tag. For classifying the extracted facemarks, we apply the dynamic time alignment kernel (DTAK) and the string subsequence kernel (SSK), both for scoring in the k-nearest neighbor (k-NN) method and for extending standard Support Vector Machines (SVMs) to accept sequential data such as facemarks. We show empirically that, with appropriate parameter settings, our methods perform well on both extraction and classification of facemarks.
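The kernel-scored k-NN classification described above can be illustrated with a minimal sketch. The similarity here is a normalized longest-common-subsequence score, a much-simplified stand-in for the paper's DTAK and SSK kernels, and the tiny training set and emotion labels are invented for illustration:

```python
def lcs_len(s, t):
    # Classic dynamic program for longest-common-subsequence length.
    dp = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i, a in enumerate(s):
        for j, b in enumerate(t):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a == b else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(s)][len(t)]

def similarity(s, t):
    # Normalized so identical strings score 1.0.
    if not s or not t:
        return 0.0
    return 2.0 * lcs_len(s, t) / (len(s) + len(t))

def knn_classify(query, labeled, k=3):
    # Score every labeled facemark against the query, vote among the top k.
    ranked = sorted(labeled, key=lambda ex: similarity(query, ex[0]), reverse=True)
    votes = {}
    for mark, label in ranked[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# Hypothetical training set; the paper's categories and data differ.
train = [("(^_^)", "happy"), ("(^o^)", "happy"), ("(T_T)", "sad"),
         (";_;", "sad"), ("(>_<)", "upset")]
print(knn_classify("(^-^)", train, k=3))  # → happy
```

A real sequence kernel would weight non-contiguous character matches with a decay factor rather than counting them uniformly, which is what makes SSK robust to the many small variations among facemarks.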
Citations: 25
Affective interactions: the computer in the affective loop
Pub Date : 2005-01-10 DOI: 10.1145/1040830.1040838
C. Conati, S. Marsella, Ana Paiva
There has been increasing interest in exploring how recognition of a user's affective state can be exploited to create more effective human-computer interaction. It has been argued that IUIs may be able to improve interaction by including affective elements in their communication with the user (e.g. by showing empathy through appropriate phrasing of feedback). This workshop will address a variety of issues related to the development of what we call the affective loop: detection/modeling of relevant user states, selection of appropriate system responses (including responses designed to influence the user's affective state without being overtly affective), and synthesis of the appropriate affective expressions.
Citations: 45
Intelligent interfaces for preference-based search
Pub Date : 2005-01-10 DOI: 10.1145/1040830.1040842
P. Pu, B. Faltings
Preference-based search, defined as finding the most preferred item in a large collection, is becoming an increasingly important subject in computer science, with many applications: multi-attribute product search, constraint-based plan optimization, configuration design, and recommendation systems. Decision theory formalizes what the most preferred item is and how it can be identified. In recent years, decision theory has pointed out discrepancies between normative models of how people should reason and empirical studies of how they in fact think and decide. However, many search tools are still based on the normative model, thus ignoring some of the fundamental cognitive aspects of human decision making. Consequently, these search tools do not find accurate results for users. This tutorial starts by giving an overview of recent literature in decision theory and explaining the differences between descriptive and normative approaches. It then describes some of the principles derived from behavioral decision theory and how they can be turned into principles for developing intelligent user interfaces that help users make better choices while searching. It develops in particular the issues of how to model user preferences with limited interaction effort, how to support tradeoffs, and how to implement practical search tools using these principles.
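The normative notion of a "most preferred item" that the tutorial starts from is often made concrete with a weighted additive utility model. The sketch below is that standard textbook formulation, not anything specific to this tutorial, and the laptop data and attribute weights are hypothetical; changing the weights illustrates the kind of tradeoff the tutorial discusses:

```python
def utility(item, weights):
    # Weighted additive utility over normalized attributes (1.0 = best).
    return sum(weights[attr] * value for attr, value in item["attrs"].items())

def most_preferred(items, weights):
    # The normative answer: the item maximizing expected utility.
    return max(items, key=lambda it: utility(it, weights))

# Hypothetical catalog: two laptops with a price/battery tradeoff.
laptops = [
    {"name": "A", "attrs": {"price": 0.9, "battery": 0.4}},
    {"name": "B", "attrs": {"price": 0.3, "battery": 0.95}},
]
print(most_preferred(laptops, {"price": 0.7, "battery": 0.3})["name"])  # → A
print(most_preferred(laptops, {"price": 0.2, "battery": 0.8})["name"])  # → B
```

The empirical point of the tutorial is precisely that real users rarely arrive with such a fixed weight vector, so an intelligent interface must elicit and revise these preferences incrementally during search.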
Citations: 1
Vision based GUI for interactive mobile robots
Pub Date : 2005-01-10 DOI: 10.1145/1040830.1040887
Randeep Singh, B. Seth, U. Desai
Interactive mobile robots are an active area of research. This paper presents a framework for designing a real-time, vision-based hand-body gesture user interface for such robots. The framework works under real-world lighting conditions, with complex backgrounds, and can handle intermittent motion of the camera. The input signal is captured by a single monocular color camera; vision is the only feedback sensor used. It is assumed that the gesturer is wearing clothes that differ slightly from the background. We have tested this framework on a database of 11 hand-body gestures and recorded recognition accuracy of up to 90%.
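The stated assumption that the gesturer's clothes differ from the background suggests a simple difference-based segmentation as a first step. The sketch below is our own illustration of that idea on toy grayscale frames, not the paper's actual color-based pipeline, and the pixel values and threshold are invented:

```python
def segment_foreground(frame, background, threshold=30):
    # Mark pixels whose intensity differs from a background model by more
    # than `threshold`. A crude, hypothetical stand-in for the paper's
    # segmentation stage; real systems also handle camera motion and color.
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

background = [[100, 100, 100], [100, 100, 100]]
frame      = [[100, 180, 100], [100, 185, 100]]  # a hand enters the scene
mask = segment_foreground(frame, background)
print(mask)  # → [[0, 1, 0], [0, 1, 0]]
```

The resulting binary mask would then feed a gesture classifier; handling the intermittent camera motion mentioned in the abstract requires updating the background model rather than keeping it fixed as here.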
Citations: 1
Two-way adaptation for robust input interpretation in practical multimodal conversation systems
Pub Date : 2005-01-10 DOI: 10.1145/1040830.1040849
Shimei Pan, Siwei Shen, Michelle X. Zhou, K. Houck
Multimodal conversation systems allow users to interact with computers effectively using multiple modalities, such as natural language and gesture. However, these systems have not been widely used in practical applications, mainly due to their limited input understanding capability. As a result, conversation systems often fail to understand user requests and leave users frustrated. To address this issue, most existing approaches focus on improving a system's interpretation capability. Nonetheless, such improvements may still be limited, since they can never cover the entire range of input expressions. Instead, we present a two-way adaptation framework that allows both users and systems to dynamically adapt to each other's capabilities and needs during the course of interaction. Compared to existing methods, our approach offers two unique contributions. First, it improves the usability and robustness of a conversation system by helping users dynamically learn the system's capabilities in context. Second, it enhances the overall interpretation capability of a conversation system by learning new user expressions on the fly. Our preliminary evaluation shows the promise of this approach.
Citations: 13
Preliminary design guidelines for pedagogical agent interface image
Pub Date : 2005-01-10 DOI: 10.1145/1040830.1040884
A. L. Baylor
Pedagogical agent image is a key feature for animated interface agents. Experimental research indicates that agent interface images should be carefully designed, considering both the relevant outcomes (learning or motivational) together with student characteristics. This paper summarizes empirically-derived design guidelines for pedagogical agent image.
Citations: 13
Task learning by instruction in tailor
Pub Date : 2005-01-10 DOI: 10.1145/1040830.1040874
J. Blythe
In order for intelligent systems to be applicable in a wide range of situations, end users must be able to modify their task descriptions. We introduce Tailor, a system that allows users to modify task information through instruction. In this approach, the user enters a short sentence to describe the desired change. The system maps the sentence into valid, plausible modifications and checks for unexpected side-effects they may have, working interactively with the user throughout the process. We conducted preliminary tests in which subjects used Tailor to make modifications to domains drawn from the eHow website, applying modifications posted by readers as 'tips'. In this way the subjects acted as interpreters between Tailor and the human-generated descriptions of modifications. Almost all the subjects were able to make all modifications to the process descriptions with Tailor, indicating that the interpreter role is quite natural for users.
Citations: 38
Multimodal new vocabulary recognition through speech and handwriting in a whiteboard scheduling application
Pub Date : 2005-01-10 DOI: 10.1145/1040830.1040851
E. Kaiser
Our goal is to automatically recognize and enroll new vocabulary in a multimodal interface. To accomplish this, our technique aims to leverage the mutually disambiguating aspects of co-referenced, co-temporal handwriting and speech. The co-referenced semantics are spatially and temporally determined by our multimodal interface for schedule chart creation. This paper motivates and describes our technique for recognizing out-of-vocabulary (OOV) terms and enrolling them dynamically in the system. We report results for the detection and segmentation of OOV words within a small multimodal test set. On the same test set we also report utterance-, word- and pronunciation-level error rates, both over individual input modes and multimodally. We show that combining information from handwriting and speech yields significantly better results than achievable by either mode alone.
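One common way to let two recognizers mutually disambiguate each other is to fuse their scored n-best lists. The sketch below uses a simple weighted score sum over hypothetical speech and handwriting hypotheses; it is a generic illustration of the idea, not the paper's actual fusion method:

```python
def fuse_nbest(speech, handwriting, alpha=0.5):
    # Combine two recognizers' n-best lists by weighted score sum;
    # hypotheses supported by both modes rise to the top.
    combined = {}
    for word, score in speech:
        combined[word] = combined.get(word, 0.0) + alpha * score
    for word, score in handwriting:
        combined[word] = combined.get(word, 0.0) + (1 - alpha) * score
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical scores: each mode alone prefers a wrong but confusable word.
speech      = [("Joe", 0.6), ("Show", 0.4)]   # acoustically confusable
handwriting = [("Jae", 0.55), ("Joe", 0.45)]  # visually confusable
print(fuse_nbest(speech, handwriting)[0][0])  # → Joe
```

The point of the toy example is that the correct word need not be ranked first by either mode; it only needs consistent support from both, which is the intuition behind mutual disambiguation.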
Citations: 31
Suggesting novel but related topics: towards context-based support for knowledge model extension
Pub Date : 2005-01-10 DOI: 10.1145/1040830.1040876
Ana Gabriela Maguitman, David B. Leake, T. Reichherzer
Much intelligent user interface research addresses the problem of providing information relevant to a current user topic. However, little work addresses the complementary question of helping the user identify potential topics to explore next. In knowledge acquisition, this question is crucial to deciding how to extend previously-captured knowledge. This paper examines requirements for effective topic suggestion and presents a domain-independent topic-generation algorithm designed to generate candidate topics that are novel but related to the current context. The algorithm iteratively performs a cycle of topic formation, Web search for connected material, and context-based filtering. An experimental study shows that this approach significantly outperforms a baseline at developing new topics similar to those chosen by an expert for a hand-coded knowledge model.
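The iterative cycle of topic formation, Web search, and context-based filtering can be sketched as a loop. In the sketch below, `toy_search` and its corpus stand in for a real web-search call, and the filtering rule (lexical overlap with the context) is a crude illustration of the paper's "novel but related" criterion, not its actual filter:

```python
def suggest_topics(context_terms, search, rounds=2, keep=3):
    # Iterate: form a query from the current terms, search for connected
    # material, then keep candidates that are novel (not already in the
    # model) but related (share vocabulary with the context).
    terms = set(context_terms)
    suggestions = []
    for _ in range(rounds):
        candidates = search(sorted(terms))
        related = [c for c in candidates
                   if c not in terms and set(c.split()) & terms]
        suggestions.extend(related[:keep])
        # Fold accepted topics back into the context for the next round.
        terms.update(w for c in related[:keep] for w in c.split())
    return suggestions

# Toy corpus standing in for web search results (hypothetical data).
corpus = {"mars": ["mars rover", "mars atmosphere", "jupiter moons"]}
def toy_search(query_terms):
    return [hit for t in query_terms for hit in corpus.get(t, [])]

print(suggest_topics(["mars"], toy_search, rounds=1))
# → ['mars rover', 'mars atmosphere']  ("jupiter moons" is filtered out)
```

Note how "jupiter moons" is dropped despite being returned by the search: it has no lexical connection to the context, which is the role the context-based filter plays in the real algorithm (presumably with a much richer relatedness measure).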
Citations: 32
Metafor: visualizing stories as code
Pub Date : 2005-01-10 DOI: 10.1145/1040830.1040908
Hugo Liu, H. Lieberman
Every program tells a story. Programming, then, is the art of constructing a story about the objects in the program and what they do in various situations. So-called programming languages, while easy for the computer to accurately convert into code, are, unfortunately, difficult for people to write and understand. We explore the idea of using descriptions in a natural language as a representation for programs. While we cannot yet convert arbitrary English to fully specified code, we can use a reasonably expressive subset of English as a visualization tool. Simple descriptions of program objects and their behavior generate scaffolding (underspecified) code fragments, which can be used as feedback for the designer. Roughly speaking, noun phrases can be interpreted as program objects; verbs can be functions; adjectives can be properties. A surprising amount of what we call programmatic semantics can be inferred from linguistic structure. We present a program editor, Metafor, that dynamically converts a user's stories into program code, and in a user study, participants found it useful as a brainstorming tool.
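The mapping of noun phrases to objects and "has" relations to properties can be illustrated with a toy pattern-based generator. This is our own sketch of the idea, not Metafor's actual parser (which handles far richer linguistic structure); it recognizes only two hard-coded sentence patterns:

```python
import re

def story_to_code(story):
    # Noun phrases introduced by "there is a/an X" become classes;
    # "the X has a/an Y" becomes a scaffolding property on X.
    classes = {}
    for m in re.finditer(r"[Tt]here is an? (\w+)", story):
        classes.setdefault(m.group(1), [])
    for m in re.finditer(r"[Tt]he (\w+) has an? (\w+)", story):
        classes.setdefault(m.group(1), []).append(m.group(2))
    out = []
    for name, props in classes.items():
        out.append(f"class {name.capitalize()}:")
        out.append("    def __init__(self):")
        out.extend(f"        self.{p} = None" for p in props)
        if not props:
            out.append("        pass")
    return "\n".join(out)

code = story_to_code("There is a bar. The bar has a menu.")
print(code)
# → class Bar:
#       def __init__(self):
#           self.menu = None
```

Even this trivial version shows the feedback loop the abstract describes: the generated scaffolding is deliberately underspecified (`None` placeholders), inviting the author to elaborate the story rather than write code directly.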
Citations: 102