
Latest publications: Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering (NLPKE-2010)

A method for generating document summary using field association knowledge and subjectively information
Abdunabi Ubul, E. Atlam, K. Morita, M. Fuketa, J. Aoe
In recent years, the expansion of the Internet has brought tremendous growth in the volume of electronic text documents available on the Web, which makes it difficult for users to locate needed information efficiently. To facilitate efficient searching, research on summarizing the general outline of a text document is essential. Moreover, as information from bulletin boards, blogs, and other sources is used as consumer-generated media data, text summarization becomes necessary. This paper presents a new method for document summarization using three kinds of attribute information: fields, associated terms, and attribute grammars; the method establishes a formal and efficient generation technique. Experiments using information from 400 blogs show that summary accuracy, readability, and meaning integrity are 87.5%, 85%, and 86%, respectively.
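The abstract does not give the scoring details, but the field-association idea can be illustrated with a minimal extractive sketch: sentences are ranked by their overlap with a set of field-association terms, and the top-scoring sentences form the summary. The term list and the `summarize` helper below are invented for illustration, not the authors' actual method.

```python
# Hypothetical sketch: score sentences by overlap with a field-association
# term list, then keep the top-scoring sentences as the summary.
# The term list and weights are illustrative, not the authors' actual data.

def summarize(sentences, field_terms, top_n=2):
    """Rank sentences by how many field-association terms they contain."""
    scored = []
    for i, sent in enumerate(sentences):
        words = set(sent.lower().split())
        scored.append((len(words & field_terms), i, sent))
    # Highest score first; ties broken by original order.
    scored.sort(key=lambda t: (-t[0], t[1]))
    # Restore document order among the chosen sentences for readability.
    chosen = sorted(scored[:top_n], key=lambda t: t[1])
    return [sent for _, _, sent in chosen]

sentences = [
    "The stock market fell sharply today.",
    "I had coffee this morning.",
    "Investors sold shares amid market fears.",
]
field_terms = {"stock", "market", "shares", "investors"}
print(summarize(sentences, field_terms))
```

A real system would weight terms by field-association strength rather than counting raw overlap.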
{"title":"A method for generating document summary using field association knowledge and subjectively information","authors":"Abdunabi Ubul, E. Atlam, K. Morita, M. Fuketa, J. Aoe","doi":"10.1109/NLPKE.2010.5587853","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587853","url":null,"abstract":"In the recent years, with the expansion of the Internet there has been tremendous growth in the volume of electronic text documents available information on the Web, which making difficulty for users to locate efficiently needed information. To facilitate efficient searching for information, research to summarize the general outline of a text document is essential. Moreover, as the information from bulletin boards, blogs, and other sources is being used as consumer generated media data, text summarization become necessary. In this paper a new method for document summary using three attribute information called: the field, associated terms, and attribute grammars is presented, this method establish a formal and efficient generation technology. From the experiments results it turns out that the summary accuracy rate, readability, and meaning integrity are 87.5%, 85%, and 86%, respectively using information from 400 blogs.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122098779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Multi-Document summarization based on improved features and clustering
Ying Xiong, Hongyan Liu, Lei Li
Multi-document summarization is an emerging technique for understanding the main points of many documents about the same topic. This paper proposes a new feature selection method to improve summarization results. When calculating similarity, we use a modified TFIDF formula that achieves better results, and we adopt two methods for extracting keywords precisely. Experimental results demonstrate that our improved method performs better than the traditional one.
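As a reference point for the similarity step, standard TF-IDF weighting with cosine similarity can be sketched as follows; the paper's modified TFIDF formula is not specified in the abstract, so this shows only the conventional tf * log(N/df) baseline it presumably departs from.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Return one {term: tf * log(N/df)} weight dict per document."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))          # document frequency per term
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse weight dicts."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = ["apple banana apple", "banana cherry", "cherry date"]
vecs = tfidf_vectors(docs)
```

Documents sharing a term ("banana") score above disjoint ones, which is the property any TFIDF variant must preserve.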
{"title":"Multi-Document summarization based on improved features and clustering","authors":"Ying Xiong, Hongyan Liu, Lei Li","doi":"10.1109/NLPKE.2010.5587834","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587834","url":null,"abstract":"Multi-Document summarization is an emerging technique for understanding the main purpose of many documents about the same topic. This paper proposes a new feature selection method to improve the summarization result. When calculating similarity, we use a modified TFIDF formula which achieves a better result. We adopt two ways for exactly extracting keywords. Experimental results demonstrate that our improved method performs better than the traditional one.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128687187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Designing effective web mining-based techniques for OOV translation
Haitao Yu, F. Ren, Degen Huang, Lishuang Li
Due to the limited coverage of existing bilingual dictionaries, it is often difficult to translate Out-Of-Vocabulary (OOV) terms in many natural language processing tasks. In this paper, we propose a general three-step cascade mining technique that leverages the OOV category to optimize the effectiveness of each step. An OOV-category-based expansion policy is suggested to retrieve more relevant mixed-language documents, an OOV-category-based hybrid extraction approach is suggested to perform robust extraction, and a more flexible model combination based on OOV category is also suggested. Moreover, we conducted experiments to evaluate the effectiveness of each step and the overall performance of the mining technique. The experimental results show significant performance improvement over existing methods.
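One of the cascade's steps, extracting translation candidates from retrieved mixed-language snippets, might look like the following toy sketch; the snippets, the co-occurrence ranking, and the CJK-run pattern are illustrative stand-ins, not the paper's category-specific method.

```python
import re
from collections import Counter

def rank_candidates(oov, snippets, candidate_pattern):
    """Rank target-language candidates by co-occurrence with the OOV term."""
    counts = Counter()
    for snip in snippets:
        if oov in snip:                               # snippet mentions the OOV term
            counts.update(re.findall(candidate_pattern, snip))
    return [cand for cand, _ in counts.most_common()]

# Invented mixed-language snippets standing in for web search results.
snippets = [
    "Avatar (阿凡达) opened worldwide.",
    "The film Avatar, or 阿凡达, broke records.",
    "阿凡提 is an unrelated folk character.",
]
# Candidate pattern: any run of CJK characters.
ranked = rank_candidates("Avatar", snippets, r"[\u4e00-\u9fff]+")
```

The unrelated term never co-occurs with the OOV query, so it is excluded; frequency alone is of course a weak signal that the paper's hybrid extraction refines.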
{"title":"Designing effective web mining-based techniques for OOV translation","authors":"Haitao Yu, F. Ren, Degen Huang, Lishuang Li","doi":"10.1109/NLPKE.2010.5587807","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587807","url":null,"abstract":"Due to a limited coverage of the existing bilingual dictionary, it is often difficult to translate the Out-Of-Vocabulary terms (OOV) in many natural language processing tasks. In this paper, we propose a general cascade mining technique of three steps, it leverages OOV category to optimize the effectiveness of each step. OOV category based expansion policy is suggested to get more relevant mixed-language documents. OOV category based hybrid extraction approach is suggested to perform a robust extraction. A more flexible model combination based on OOV category is also suggested. Moreover, we conducted experiments to evaluate the effectiveness of each step and the overall performance of the mining technique. The experimental results show significantly performance improvement than the existing methods.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115674388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Realization of a high performance bilingual OCR system for Thai-English printed documents
S. Tangwongsan, Buntida Suvacharakulton
This paper presents a high-performance bilingual OCR system for printed Thai and English text. Given the complex nature of both languages, the first stage identifies the language within different zones using geometric properties. The second stage is character recognition, for which the technique developed includes a feature extractor and a classifier. In feature extraction, the thinned character image is analyzed and categorized into groups. The classifier then recognizes in two steps: a coarse level followed by a fine level guided by decision trees. To obtain an even better result, the final stage uses dictionary look-up to improve overall accuracy. For verification, the system was tested in a series of experiments on printed documents of 141 pages and over 280,000 characters; the results show average accuracies of 100% on Thai monolingual, 98.18% on English monolingual, and 99.85% on bilingual documents. In the final stage, with dictionary look-up, the system yields an improved accuracy of up to 99.98% on bilingual documents, as expected.
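The coarse-then-fine idea can be shown with a toy sketch: a cheap coarse feature narrows the candidate set before a fine match picks the character. The glyph groups and feature tuples below are invented; the paper's actual fine classifier uses decision trees over extracted features.

```python
# Toy two-stage classification: a coarse feature (here, a height class)
# restricts the candidates, then a fine feature match (standing in for the
# decision-tree step) selects the character. All "features" are invented.

COARSE_GROUPS = {
    "tall": {"b", "d", "k"},
    "short": {"a", "c", "e"},
}
FINE_FEATURES = {"b": (1, 0), "d": (0, 1), "k": (1, 1),
                 "a": (1, 0), "c": (0, 1), "e": (1, 1)}

def classify(height_class, fine_feature):
    candidates = COARSE_GROUPS[height_class]   # coarse step: shrink the search space
    for ch in sorted(candidates):              # fine step: match within the group
        if FINE_FEATURES[ch] == fine_feature:
            return ch
    return None
```

The benefit is that the expensive fine comparison runs only against the small coarse group rather than the full alphabet.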
{"title":"Realization of a high performance bilingual OCR system for Thai-English printed documents","authors":"S. Tangwongsan, Buntida Suvacharakulton","doi":"10.1109/NLPKE.2010.5587781","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587781","url":null,"abstract":"This paper presents a high performance bilingual OCR system for printed Thai and English text. With the complex nature of both Thai and English languages, the first stage is to identify languages within different zones by using geometric properties for differentiation. The second stage is the process of character recognition, in which the technique developed includes a feature extractor and a classifier. In the feature extraction, the thinned character image is analyzed and categorized into groups. Next, the classifier will take in two steps of recognition: the coarse level, followed by the fine level with a guide of decision trees. As to obtain an even better result, the final stage attempts to make use of dictionary look-up as to check for accuracy improvement in an overall performance. For verification, the system is tested by a series of experiments with printed documents in 141 pages and over 280,000 characters, the result shows that the system could obtain an accuracy of 100% in Thai monolingual, 98.18% in English monolingual, and 99.85% in bilingual documents on the average. 
In the final stage with a dictionary look-up, the system could yield a better accuracy of improvement up to 99.98% in bilingual documents as expected.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121132894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Automatic filtration of multiword units
Y. Liu, Zheng Tie
This paper studies how to filter multiword units. We use normalized expectation (NE) to extract multiword unit candidates from a patent corpus. The candidates are then filtered using stop words, frequency, first stop words, last stop words, and contextual entropy. Experimental results show that the precision of multiword units improves by 8.7% after filtering.
{"title":"Automatic filtration of multiword units","authors":"Y. Liu, Zheng Tie","doi":"10.1109/NLPKE.2010.5587783","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587783","url":null,"abstract":"This paper studies how to filtrate multiword units. We use normalized expectation (NE) to extract multiword unit candidates from patent corpus. Then the multiword unit candidates are filtrated using stop words, frequency, first stop words, last stop words, and contextual entropy. The experimental result shows that the precision rate of multiword units is improved by 8.7% after filtration.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"261 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131807931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Document expansion using relevant web documents for spoken document retrieval
Ryo Masumura, A. Ito, Yu Uno, Masashi Ito, S. Makino
Recently, automatic indexing of spoken documents using a speech recognizer has attracted attention. However, index generation from an automatic transcription is problematic because the transcription contains many recognition errors and Out-Of-Vocabulary (OOV) words. To solve this problem, we propose a document expansion method using Web documents. To recover important keywords that are included in the spoken document but lost through recognition errors, we acquire Web documents relevant to the spoken document. An index of the spoken document is then generated by combining an index built from the automatic transcription with one built from the Web documents. We propose a method for retrieving relevant documents, and the experimental results show that the retrieved Web documents contained many OOV words. Next, we propose a method for combining the recognized index and the Web index. The experiments show that the index generated by document expansion was closer to an index from the manual transcription than the index generated by the conventional method. Finally, we conducted a spoken document retrieval experiment, and the document-expansion-based index gave better retrieval precision than the conventional indexing method.
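The index-combination step can be sketched as a linear interpolation of term weights from the ASR transcript and from the retrieved Web documents, so that OOV terms missed by the recognizer re-enter the index. The interpolation weight and the toy term dictionaries below are invented; the paper's actual combination method may differ.

```python
from collections import Counter

def combine_indexes(asr_terms, web_terms, alpha=0.7):
    """Merge two {term: weight} indexes with linear interpolation."""
    index = Counter()
    for term, w in asr_terms.items():
        index[term] += alpha * w
    for term, w in web_terms.items():
        index[term] += (1 - alpha) * w
    return dict(index)

asr = {"speech": 3, "recognition": 2}
web = {"speech": 2, "vocabulary": 4}   # "vocabulary" was an ASR miss
idx = combine_indexes(asr, web)
```

Terms present in either source survive into the merged index, which is exactly the recovery effect document expansion aims for.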
{"title":"Document expansion using relevant web documents for spoken document retrieval","authors":"Ryo Masumura, A. Ito, Yu Uno, Masashi Ito, S. Makino","doi":"10.1109/NLPKE.2010.5587854","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587854","url":null,"abstract":"Recently, automatic indexing of a spoken document using a speech recognizer attracts attention. However, index generation from an automatic transcription has many problems because the automatic transcription has many recognition errors and Out-Of-Vocabulary words. To solve this problem, we propose a document expansion method using Web documents. To obtain important keywords which included in the spoken document but lost by recognition errors, we acquire Web documents relevant to the spoken document. Then, an index of the spoken document is generated by combining an index that generated from the automatic transcription and the Web documents. We propose a method for retrieval of relevant documents, and the experimental result shows that the retrieved Web document contained many OOV words. Next, we propose a method for combining the recognized index and the Web index. The experimental result shows that the index of the spoken document generated by the document expansion was closer to an index from the manual transcription than the index generated by the conventional method. 
Finally, we conducted a spoken document retrieval experiment, and the document-expansion-based index gave better retrieval precision than the conventional indexing method.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"27 8","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121007971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Needs and challenges of care robots in nursing care setting: A literature review
Yuko Nagai, T. Tanioka, Shoko Fuji, Yuko Yasuhara, Sakiko Sakamaki, Narimi Taoka, R. Locsin, Fuji Ren, Kazuyuki Matsumoto
This study aims to identify the needs and challenges of care robots in nursing care settings through an extensive literature search. The results show a shortage of information about the outcomes of introducing care robots, the needs of recipients and care providers, and the relevant ethical problems. To advance our research and introduce care robots into care settings, much remains to be done: considering the application of natural language processing technology in collaboration with researchers in the robotics field, carrying out investigations, extracting needs, clarifying ethical problems and seeking solutions, and conducting on-site experimental studies.
{"title":"Needs and challenges of care robots in nursing care setting: A literature review","authors":"Yuko Nagai, T. Tanioka, Shoko Fuji, Yuko Yasuhara, Sakiko Sakamaki, Narimi Taoka, R. Locsin, Fuji Ren, Kazuyuki Matsumoto","doi":"10.1109/NLPKE.2010.5587815","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587815","url":null,"abstract":"This study aims to identify needs and challenges of care robot in nursing care setting through an extensive search of the literature. As the result shows, there exists a shortage of information about results of the introduction of care robots, the needs of recipients and care providers, and relevant ethical problems. To advance our research and to introduce care robots into setting, there are so many things to do; consider the application of natural language processing technology by collaborating with researchers in the robotics field, carry out an investigation, extract the needs, clarify ethical problems and seek solutions, conduct the on-site experiment study, and so on.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132952033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
A new cascade algorithm based on CRFs for recognizing Chinese verb-object collocation
Guiping Zhang, Zhichao Liu, Qiaoli Zhou, Dongfeng Cai, Jiao Cheng
This paper proposes a new cascade algorithm based on conditional random fields. The algorithm is applied to the automatic recognition of Chinese verb-object collocations and is combined with a new sequence labeling scheme, “ONIY”. Experiments compare recognition results under two segmentation and part-of-speech tag sets. The comprehensive results show a best F-score of 90.65% on the Tsinghua Treebank and 82.00% under the segmentation and part-of-speech tagging scheme of Peking University. Our experiments show that the proposed algorithm greatly improves the recognition accuracy of multi-nested collocations and plays a positive role in long-distance collocation.
{"title":"A new cascade algorithm based on CRFs for recognizing Chinese verb-object collocation","authors":"Guiping Zhang, Zhichao Liu, Qiaoli Zhou, Dongfeng Cai, Jiao Cheng","doi":"10.1109/NLPKE.2010.5587828","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587828","url":null,"abstract":"This paper proposes a new cascade algorithm based on conditional random fields. The algorithm is applied to automatic recognition of Chinese verb-object collocation, and combined with a new sequence labeling of “ONIY”. Experiments compare identified results under two segmentations and part-of-speech tag sets. The comprehensive experimental results show that the best performance is 90.65 % in F-score over Tsinghua Treebank, and 82.00 % in F-score over the segmentation and part-of-speech tagging scheme of Peking University. Our experiments show that the proposed algorithm can greatly improve recognition accuracy of multi-nested collocation, and play a positive role on long distance collocation.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114551334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Negation disambiguation using the maximum entropy model
Chunliang Zhang, Xiaoxu Fei, Jingbo Zhu
Handling negation is of great significance for sentiment analysis. Most previous studies adopted a simple heuristic rule for sentiment negation disambiguation within a fixed context window. In this paper we present a supervised method to disambiguate which sentiment word is attached to a negator such as “(not)” in an opinionated sentence. Experimental results show that our method achieves better performance than traditional methods.
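The abstract does not describe the feature set, but the input to such a classifier for a negator-attachment decision might look like this sketch; the negator and sentiment lexicons and the feature names are invented, and maximum-entropy training itself is omitted.

```python
# Hypothetical feature extraction: for each (negator, sentiment word) pair
# in a sentence, emit distance and position features. A maximum-entropy
# (log-linear) model would then score which attachment is most likely.

NEGATORS = {"not", "never", "no"}
SENTIMENT = {"good", "bad", "great", "terrible"}

def attachment_features(tokens):
    feats = []
    for i, tok in enumerate(tokens):
        if tok in NEGATORS:
            for j, cand in enumerate(tokens):
                if cand in SENTIMENT:
                    feats.append({
                        "negator": tok,
                        "candidate": cand,
                        "distance": abs(j - i),
                        "candidate_follows": j > i,
                    })
    return feats

feats = attachment_features("the movie is not bad but the plot is good".split())
```

Here the nearby "bad" (distance 1) is the plausible attachment and the distant "good" is not, which is the distinction a fixed-window heuristic gets wrong in harder sentences.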
{"title":"Negation disambiguation using the maximum entropy model","authors":"Chunliang Zhang, Xiaoxu Fei, Jingbo Zhu","doi":"10.1109/NLPKE.2010.5587857","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587857","url":null,"abstract":"Handling negation issue is of great significance for sentiment analysis. Most previous studies adopted a simple heuristic rule for sentiment negation disambiguation within a fixed context window. In this paper we present a supervised method to disambiguate which sentiment word is attached to the negator such as “(not)” in an opinionated sentence. Experimental results show that our method can achieve better performance than traditional methods.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117237956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Distributed training for Conditional Random Fields
Xiaojun Lin, Liang Zhao, Dianhai Yu, Xihong Wu
This paper proposes a novel distributed training method for Conditional Random Fields (CRFs) that utilizes clusters built from commodity computers. The method employs the Message Passing Interface (MPI) to deal with large-scale data in two steps. First, the entire training set is divided into several small pieces, each of which can be handled by one node. Second, instead of using a root node to collect all features, a new criterion splits the whole feature set into non-overlapping subsets, ensuring that each node maintains the global information for one feature subset. Experiments on large-scale Chinese word segmentation (WS) show significant reductions in both training time and space while preserving performance.
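The first step (data sharding) and the gradient combination it enables can be simulated without MPI: each "node" computes a partial gradient over its shard, and summing the partials reproduces the full-batch gradient. The squared-error objective below is a stand-in for the actual CRF gradient, and the feature-set split of step two is not shown.

```python
def partial_gradient(shard, weights):
    """Stand-in for per-node CRF expectations: gradient of a squared
    error summed over the examples in this shard."""
    return [sum(2 * (w - x) for x in shard) for w in weights]

def distributed_gradient(data, weights, n_nodes):
    shards = [data[i::n_nodes] for i in range(n_nodes)]        # step 1: split data
    partials = [partial_gradient(s, weights) for s in shards]  # each node works locally
    return [sum(g) for g in zip(*partials)]                    # combine (an allreduce in MPI)

data = [1.0, 2.0, 3.0, 4.0]
weights = [0.0]
dist = distributed_gradient(data, weights, n_nodes=2)
full = partial_gradient(data, weights)
```

Because the objective is a sum over training examples, the sharded computation is exact, not an approximation; in a real cluster the final sum would be an `MPI_Allreduce`.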
{"title":"Distributed training for Conditional Random Fields","authors":"Xiaojun Lin, Liang Zhao, Dianhai Yu, Xihong Wu","doi":"10.1109/NLPKE.2010.5587803","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587803","url":null,"abstract":"This paper proposes a novel distributed training method of Conditional Random Fields (CRFs) by utilizing the clusters built from commodity computers. The method employs Message Passing Interface (MPI) to deal with large-scale data in two steps. Firstly, the entire training data is divided into several small pieces, each of which can be handled by one node. Secondly, instead of adopting a root node to collect all features, a new criterion is used to split the whole feature set into non-overlapping subsets and ensure that each node maintains the global information of one feature subset. Experiments are carried out on the task of Chinese word segmentation (WS) with large scale data, and we observed significant reduction on both training time and space, while preserving the performance.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123421571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5