
Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering (NLPKE-2010): Latest Publications

Conversion between dependency structures and phrase structures using a head finder algorithm
Xinxin Li, Xuan Wang, Lin Yao
This paper proposes a method for converting projective dependency structures into flat phrase structures with language-independent syntactic categories, and a head finder algorithm for converting these phrase structures back into dependency structures. The head finder algorithm is implemented with a maximum entropy approach using constraint information. The converted phrase structures can be parsed with a hierarchical coarse-to-fine method with latent variables. Experimental results show that the approach finds 98.8% of all phrase heads, and the algorithm achieves state-of-the-art dependency parsing performance on the English Treebank.
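The deterministic half of this conversion is easy to illustrate. Below is a minimal Python sketch, not the authors' implementation: it collapses each projective dependency subtree into a single flat phrase with a language-independent label "X", and stubs the learned head finder with a generic `score` callback standing in for the maximum entropy model.

```python
def dependency_to_flat_phrases(heads):
    """heads[i] is the 0-based head index of token i, or -1 for the root."""
    children = {i: [] for i in range(len(heads))}
    for tok, head in enumerate(heads):
        if head >= 0:
            children[head].append(tok)

    def span(node):
        toks = [node] + [t for c in children[node] for t in span(c)]
        return sorted(toks)

    # One flat phrase with the generic label "X" per word that governs dependents.
    return [("X", span(node)) for node in children if children[node]]


def pick_head(phrase_tokens, score):
    """Stand-in for the learned head finder: score(token) would be the
    maximum entropy model's probability that the token heads this phrase."""
    return max(phrase_tokens, key=score)


if __name__ == "__main__":
    # "John saw Mary": saw (index 1) is the root; John and Mary depend on it.
    print(dependency_to_flat_phrases([1, -1, 1]))   # -> [('X', [0, 1, 2])]
```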
Citations: 1
Application of Chinese sentiment categorization to digital products reviews
Hongying Zan, Kuizhong Kou, Jiale Tian
Sentiment categorization has been widely explored in many fields, such as government policy, information monitoring, and product tracking. This paper adopts k-NN, Naive Bayes, and SVM classifiers to categorize the sentiments expressed in online Chinese reviews of digital products. Our experimental results show that, when words and phrases with sentiment orientation are combined as hybrid features, the SVM classifier achieves an accuracy of 96.47%, compared with using words of all parts of speech as features.
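As a rough illustration of the hybrid-feature idea (assuming scikit-learn; the tiny sentiment lexicon is a toy stand-in for the resources used in the paper), plain bag-of-words features can be concatenated with counts of sentiment-oriented words and phrases before training an SVM:

```python
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Toy sentiment-orientation lexicon; the paper uses real lexical resources.
sentiment_lexicon = ["好用", "出色", "满意", "失望", "卡顿", "太差"]

hybrid_features = FeatureUnion([
    ("all_words", CountVectorizer()),                          # every token
    ("sentiment_terms", CountVectorizer(vocabulary=sentiment_lexicon)),
])

classifier = Pipeline([
    ("features", hybrid_features),
    ("svm", LinearSVC()),
])

# reviews: list of word-segmented review strings; labels: positive / negative
# classifier.fit(reviews, labels)
# predictions = classifier.predict(new_reviews)
```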
Citations: 1
Recognizing sentiment polarity in Chinese reviews based on topic sentiment sentences
Jiang Yang, Min Hou, Ning Wang
We present an approach to recognizing sentiment polarity in Chinese reviews based on topic sentiment sentences. Considering the characteristics of Chinese reviews, we first identify the topic of a review using an n-gram matching approach. To extract candidate topic sentiment sentences, we compute the semantic similarity between a given sentence and the identified topic and, at the same time, determine whether the sentence is subjective. A number of these sentences are then selected as representatives according to their semantic similarity to the topic. The average sentiment value of the representative topic sentiment sentences is calculated and taken as the sentiment polarity of the review. Experimental results show that the proposed method is feasible and achieves relatively high precision.
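The aggregation step can be sketched as follows. This is a simplified reading of the abstract, not the authors' code: the `similarity`, `is_subjective`, and `polarity` functions are assumed to be provided (in the paper they come from semantic similarity computation and subjectivity/sentiment analysis).

```python
def review_polarity(sentences, topic, similarity, is_subjective, polarity, k=3):
    """Average the polarity of the k subjective sentences most similar to the topic."""
    candidates = [s for s in sentences if is_subjective(s)]
    # Rank candidate sentences by semantic similarity to the identified topic.
    representatives = sorted(candidates, key=lambda s: similarity(s, topic),
                             reverse=True)[:k]
    if not representatives:
        return 0.0  # no subjective sentence found; treat the review as neutral
    scores = [polarity(s) for s in representatives]
    return sum(scores) / len(scores)  # > 0 positive, < 0 negative
```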
Citations: 7
A new algorithm of fuzzy support vector machine based on niche
Ying Huang, Wei Li
A new fuzzy support vector machine algorithm based on the niche concept is presented in this paper. By comparing a sample's niche with its class niche and using the minimum radius of the class niche, the algorithm replaces the traditional support vector machine's practice of measuring the relationship between a sample and its class purely by Euclidean distance. This overcomes the traditional support vector machine's sensitivity to noise and outliers and its poor discrimination of valid samples. Experimental data show that, compared with a traditional support vector machine that only uses the distance between a sample and the class center, the new algorithm improves convergence speed and thus greatly enhances the discrimination between valid samples and noise samples.
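A speculative sketch of the membership idea follows, under the assumption that a sample's niche is the radius of its local neighbourhood within its class and the class niche is the smallest such radius in the class; the exact membership formula is our assumption, not the authors' definition.

```python
import numpy as np

def niche_memberships(X, y, k=5, eps=1e-6):
    """X: (n, d) feature array, y: (n,) label array. Returns fuzzy memberships
    in [0, 1]: samples in dense regions keep a high weight, isolated noise-like
    samples are down-weighted."""
    memberships = np.empty(len(X))
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        Xc = X[idx]
        # Sample niche: distance to the k-th nearest neighbour within the class.
        dists = np.linalg.norm(Xc[:, None, :] - Xc[None, :, :], axis=2)
        dists.sort(axis=1)
        sample_niche = dists[:, min(k, len(idx) - 1)]
        class_niche = sample_niche.min() + eps   # minimum radius in the class niche
        memberships[idx] = class_niche / (sample_niche + eps)
    return np.clip(memberships, 0.0, 1.0)

# The memberships can then serve as per-sample weights when training an SVM,
# e.g. sklearn's SVC().fit(X, y, sample_weight=niche_memberships(X, y)).
```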
Citations: 0
Term recognition using Conditional Random fields
Xing Zhang, Yan Song, A. Fang
A machine learning framework based on Conditional Random Fields (CRF) is constructed in this study, which exploits syntactic information to recognize biomedical terms. The features used in this CRF framework capture syntactic information at different levels, including parent nodes, syntactic functions, syntactic paths, and term ratios. A series of experiments was conducted to study the effects of training set size, general term recognition, and novel term recognition. The results show that features such as syntactic paths and term ratios achieve good precision for term recognition, covering both general terms and novel terms. However, the recall of novel term recognition is still unsatisfactory, which calls for more effective features. Overall, because this research studies in depth the use of some unique syntactic features, it is innovative with respect to constructing a machine-learning-based term recognition system.
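A hedged sketch of how such syntactic features can be fed to a CRF, using the sklearn-crfsuite package rather than the authors' toolkit; the feature names mirror the feature groups listed in the abstract, while the way they are computed from a parse is assumed.

```python
import sklearn_crfsuite

def token_features(sent, i):
    # Each token is assumed to be a dict with word, POS, and parse-derived fields.
    tok = sent[i]
    return {
        "word": tok["word"],
        "pos": tok["pos"],
        "parent_node": tok["parent"],          # label of the parent constituent
        "syntactic_function": tok["function"],
        "syntactic_path": tok["path"],         # path from the token to the root
        "term_ratio": round(tok["term_ratio"], 2),
    }

def sent_to_features(sent):
    return [token_features(sent, i) for i in range(len(sent))]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
# X_train: list of sentences converted with sent_to_features
# y_train: list of BIO label sequences, e.g. ["B-TERM", "I-TERM", "O", ...]
# crf.fit(X_train, y_train)
# y_pred = crf.predict(X_test)
```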
Citations: 14
Translation evaluation without reference based on user behavior model
Guiping Zhang, Ying Sun, Baosheng Yin, Na Ye
How to evaluate the output of a machine translation system is an important research topic. Traditional methods of translation evaluation without reference mostly judge the translation by its linguistic characteristics. In this paper, the user cost of post-editing is considered, and a new translation evaluation method based on a user behavior model is proposed. First, the process from post-editing the machine translation to forming the final translation is tracked and recorded, and decision knowledge about user behavior is extracted; this knowledge is then used as an indicator of translation quality and combined with a language model to evaluate the machine translation. Experimental results show that, in the absence of a reference, the proposed method is much better than a method using linguistic characteristics only, and that it is close to the BLEU method with one reference in terms of Spearman's rank correlation coefficient with human evaluation.
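The two ingredients can be illustrated roughly as follows (assumptions, not the paper's formulas): a behaviour-derived score approximated by the word-level edit operations the user applied to the MT output, blended with a language model score of the translation.

```python
from difflib import SequenceMatcher

def post_edit_operations(mt_tokens, post_edited_tokens):
    """Count the edit operations a user applied to the MT output."""
    ops = {"replace": 0, "delete": 0, "insert": 0}
    matcher = SequenceMatcher(a=mt_tokens, b=post_edited_tokens)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag in ops:
            ops[tag] += max(i2 - i1, j2 - j1)
    return ops

def behaviour_score(mt_tokens, post_edited_tokens):
    """Fewer edits relative to sentence length -> higher score in [0, 1]."""
    edits = sum(post_edit_operations(mt_tokens, post_edited_tokens).values())
    return max(0.0, 1.0 - edits / max(len(post_edited_tokens), 1))

def combined_score(mt_tokens, post_edited_tokens, lm_score, alpha=0.5):
    """Blend the behaviour-derived quality with a language-model fluency score."""
    return alpha * behaviour_score(mt_tokens, post_edited_tokens) + (1 - alpha) * lm_score
```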
Citations: 1
Towards grammar checker development for Persian language
N. Ehsan, Heshaam Faili
With improvements in industry and information technology, large volumes of electronic texts such as newspapers, emails, weblogs, books, and theses are produced daily. Producing electronic documents has considerable benefits, such as easy organization and data management. Automatic systems such as spelling and grammar checkers/correctors can therefore help reduce costs, increase the amount of electronic text, and improve its quality: a user enters text and the program points out spelling errors and may also help with grammar. Grammatical errors are wrong relations between words, such as subject-verb disagreement, or wrong sequences of words, such as using a plural noun where a singular noun is needed. The grammar checking phase starts after spell checking is finished. This paper briefly describes the concepts and definition of grammar checkers in general, then presents the development of the first Persian (Farsi) grammar checker, together with an overview of the error types of the Persian language. The proposed system detects and corrects about 20 frequent Persian grammar errors; tested on a sample dataset, it achieves about 70% precision and 83% recall.
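A schematic sketch of how a rule-based checker of this kind can be organised; the single demonstration rule is a generic plural-subject/singular-verb check with placeholder tag names, purely illustrative of the rule format rather than the Persian-specific rules in the system.

```python
def plural_subject_singular_verb(tagged):
    """tagged: list of (token, tag) pairs. Flags a plural subject immediately
    followed by a singular verb. The tag names are placeholders."""
    errors = []
    for i in range(len(tagged) - 1):
        (tok, tag), (nxt, nxt_tag) = tagged[i], tagged[i + 1]
        if tag == "NOUN_PLURAL" and nxt_tag == "VERB_SINGULAR":
            errors.append((i + 1, f"verb '{nxt}' does not agree with plural subject '{tok}'"))
    return errors

RULES = [plural_subject_singular_verb]   # the real checker encodes ~20 such rules

def check(tagged_sentence):
    """Run every rule and collect (position, message) pairs."""
    return [err for rule in RULES for err in rule(tagged_sentence)]
```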
Citations: 17
Emotion analysis in blogs at sentence level using a Chinese emotion corpus
Changqin Quan, Tingting He, F. Ren
Previous research on the emotional analysis of texts has covered a variety of text types: weblogs, stories, news, text messages, spoken dialogs, and so on. Compared with other text styles, the main characteristics of emotional expression in blogs are: (1) a highly personal, subjective writing style; (2) constantly emerging new words and expressions; (3) integrity and continuity in language use. Using a Chinese emotion corpus (Ren-CECps), this study analyzes emotion expressions in blogs at the sentence level. First, we separate sentences into two classes: simple sentences (without negative words, conjunctions, or a question mark) and complex sentences (with negative words, conjunctions, or a question mark). We then compare the two classes on sentence emotion recognition based on emotional words. Furthermore, we analyze the following factors in emotion change at the sentence level: negative words, conjunctions, punctuation marks, and contextual emotions. Finally, we hypothesize that the emotional focus of a sentence can be expressed by a particular clause in that sentence; the experimental results support this hypothesis, showing that correctly selecting the clauses that carry a sentence's emotional focus helps recognize sentence emotions.
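The first step, separating simple from complex sentences, can be sketched as below; the negation and conjunction word lists are toy stand-ins for the lexical resources of Ren-CECps.

```python
NEGATION_WORDS = {"不", "没", "没有", "别"}              # toy examples
CONJUNCTIONS = {"但是", "可是", "虽然", "因为", "所以"}    # toy examples

def classify_sentence(tokens):
    """tokens: a word-segmented Chinese sentence. Returns 'simple' or 'complex'."""
    has_negation = any(t in NEGATION_WORDS for t in tokens)
    has_conjunction = any(t in CONJUNCTIONS for t in tokens)
    has_question_mark = any(t in ("?", "？") for t in tokens)
    if has_negation or has_conjunction or has_question_mark:
        return "complex"
    return "simple"

# Emotion recognition based on emotional words is then compared across the two
# classes, e.g. by scoring each sentence against an emotion lexicon.
```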
Citations: 7
Tagging online service reviews
Suke Li, Jinmei Hao, Zhong Chen
This paper proposes a tagging method that can highlight important service aspects for users who browse online service reviews. Experiments on service aspect ranking and review tagging show that the proposed method is effective for finding important aspects and can generate useful and interesting tags for reviews.
Citations: 0
Constraint Soup
R. Niemeijer, B. Vries, J. Beetz
To facilitate mass customization in the building industry, an automated method is needed to check the validity of user-created designs. This check requires that numerous complex building codes and regulations, as well as architects' demands are formalized and captured by non-programming domain experts. This can be done via a natural language interface, as it reduces the required amount of training of users. In this paper we describe an algorithm for interpreting architectural constraints in such a system.
Citations: 1