
Latest Publications in Int. J. Comput. Linguistics Chin. Lang. Process.

Modeling Taiwanese POS Tagging Using Statistical Methods and Mandarin Training Data
Pub Date : 2009-09-01 DOI: 10.30019/IJCLCLP.200909.0001
Un-Gian Iunn, Jia-hung Tai, K. Lau, Cheng-Yan Kao, Keh-Jiann Chen
In this paper, we introduce a POS tagging method for Taiwan Southern Min. We use a Taiwanese-Mandarin dictionary of more than 62,000 entries and 10 million words of Mandarin training data to tag Taiwanese text. The written Taiwanese literary corpora contain both Romanized script and mixed Han-Romanization script, and include prose, novels, and dramas. We follow the tagset drawn up by CKIP. We developed a word alignment checker to assist with the word alignment between the two scripts. Our system then searches the Taiwanese-Mandarin dictionary for corresponding Mandarin candidate words, selects the most suitable Mandarin word using an HMM probabilistic model trained on the Mandarin data, and tags the word using an MEMM classifier. We achieve an accuracy of 91.6% on Taiwanese POS tagging and analyze the errors. We also obtain some preliminary Taiwanese training data.
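To make the selection step concrete: a minimal sketch of the HMM candidate selection described above, where each Taiwanese token carries a list of Mandarin candidates from the dictionary and Viterbi picks the sequence with the best bigram score. The probability tables and function names are illustrative assumptions, not the authors' implementation.

```python
import math

def select_mandarin(candidates, start_p, bigram_p, floor=1e-9):
    """Viterbi over per-token Mandarin candidate lists.

    candidates: one list of dictionary candidates per Taiwanese token.
    start_p / bigram_p: toy unigram-start and bigram probabilities
    assumed estimated from the Mandarin training data.
    """
    # best[w] = (log-prob of the best path ending in word w, that path)
    best = {w: (math.log(start_p.get(w, floor)), [w]) for w in candidates[0]}
    for column in candidates[1:]:
        nxt = {}
        for w in column:
            score, path = max(
                ((s + math.log(bigram_p.get((prev, w), floor)), p)
                 for prev, (s, p) in best.items()),
                key=lambda t: t[0])
            nxt[w] = (score, path + [w])
        best = nxt
    return max(best.values(), key=lambda t: t[0])[1]

# e.g. select_mandarin([["他"], ["是", "系"], ["老師"]], start_p, bigram_p)
```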
Citations: 1
Automatic Recognition of Cantonese-English Code-Mixing Speech
Pub Date : 2009-09-01 DOI: 10.30019/IJCLCLP.200909.0003
Joyce Y. C. Chan, Houwei Cao, P. Ching, Tan Lee
Code-mixing is a common phenomenon in bilingual societies. It refers to the intra-sentential switching between two different languages in a spoken utterance. This paper presents the first study on automatic recognition of Cantonese-English code-mixing speech, which is common in Hong Kong. The study starts with the design and compilation of code-mixing speech and text corpora. The problems of acoustic modeling, language modeling, and language boundary detection are investigated. Subsequently, a large-vocabulary code-mixing speech recognition system is developed based on a two-pass decoding algorithm. For acoustic modeling, cross-lingual acoustic models are shown to be more appropriate than language-dependent models. The language models are character trigrams in which the embedded English words are grouped into a small number of classes. Language boundary detection is done either by exploiting the phonological and lexical differences between the two languages or based on the result of cross-lingual speech recognition. The language boundary information is used to re-score the hypothesized syllables or words during decoding. The proposed code-mixing speech recognition system attains accuracies of 56.4% on Cantonese syllables and 53.0% on English words in code-mixing utterances.
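A minimal sketch of the class-based character trigram language model described above, assuming a toy class map for embedded English words; the class labels and probability tables are illustrative, not the paper's actual inventory.

```python
import math

# Toy class map: embedded English words collapse to a small number of
# classes before n-gram lookup (illustrative labels only).
WORD_CLASS = {"check": "<ENG_VERB>", "email": "<ENG_NOUN>"}

def to_lm_units(tokens):
    """Cantonese tokens split into characters; English words are
    replaced by their class label."""
    units = []
    for tok in tokens:
        if tok.isascii():
            units.append(WORD_CLASS.get(tok.lower(), "<ENG_MISC>"))
        else:
            units.extend(tok)  # one LM unit per Chinese character
    return units

def trigram_logprob(units, p3, floor=1e-9):
    """Sum of log P(u_i | u_{i-2}, u_{i-1}) with a padded history."""
    padded = ["<s>", "<s>"] + units
    return sum(math.log(p3.get((padded[i - 2], padded[i - 1], padded[i]), floor))
               for i in range(2, len(padded)))

# e.g. trigram_logprob(to_lm_units(["你", "幫我", "check", "一下"]), p3)
```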
Citations: 37
Fertility-based Source-Language-biased Inversion Transduction Grammar for Word Alignment
Pub Date : 2009-03-01 DOI: 10.30019/IJCLCLP.200903.0001
Chung-Chi Huang, Jason J. S. Chang
We propose a version of the Inversion Transduction Grammar (ITG) model with IBM-style fertility to improve word-alignment performance. In our approach, binary context-free grammar rules of the source language, together with orientation preferences of the target language and word fertilities, are leveraged to construct a syntax-based statistical translation model. Our model, which inherently possesses the characteristics of ITG restrictions while allowing many consecutive words to align to a single word and vice versa, outperforms the Bracketing Transduction Grammar (BTG) model and GIZA++, a state-of-the-art word aligner, not only in alignment error rate (23% and 14% error reduction, respectively) but also in consistent phrase error rate (13% and 9% error reduction). Better performance on these two evaluation metrics suggests that more accurate phrase pairs may be acquired from our word alignment result, leading to better machine translation quality.
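The fertility extension is involved, but the core ITG recursion is compact: adjacent source spans combine with either straight or inverted target order. A much-simplified sketch under stated assumptions (one-to-one links only, no null alignments or fertility; pair_score is an assumed lexical scoring function), not the paper's model:

```python
from functools import lru_cache

def itg_best_score(src, tgt, pair_score):
    """Best score of a toy ITG derivation aligning src to tgt.
    Exhaustive O(n^6) dynamic program; fine only for short sentences."""
    @lru_cache(maxsize=None)
    def best(i, j, k, l):
        if j - i == 1 and l - k == 1:        # align a single word pair
            return pair_score(src[i], tgt[k])
        scores = []
        for s in range(i + 1, j):
            for t in range(k + 1, l):
                # straight rule [A B]: target order preserved
                scores.append(best(i, s, k, t) + best(s, j, t, l))
                # inverted rule <A B>: target order swapped
                scores.append(best(i, s, t, l) + best(s, j, k, t))
        return max(scores, default=float("-inf"))
    return best(0, len(src), 0, len(tgt))
```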
Citations: 0
Assessing Text Readability Using Hierarchical Lexical Relations Retrieved from WordNet
Pub Date : 2009-03-01 DOI: 10.30019/IJCLCLP.200903.0003
Shu-Yen Lin, Cheng-chao Su, Yuda Lai, Li-Chin Yang, S. Hsieh
Although some traditional readability formulas have shown high predictive validity in the r=0.8 range and above (Chall & Dale, 1995), they are generally based not on genuine linguistic processing factors but on statistical correlations (Crossley et al., 2008). Improvement of readability assessment should focus on finding variables that truly represent the comprehensibility of text, as well as indices that accurately measure the correlations. In this study, we explore the hierarchical relations between lexical items based on the conceptual categories advanced by Prototype Theory (Rosch et al., 1976). According to this theory and its development, basic level words like guitar represent the objects humans interact with most readily. They are acquired by children earlier than their superordinate words like stringed instrument and their subordinate words like acoustic guitar. Accordingly, the readability of a text is presumably associated with the ratio of basic level words it contains. WordNet (Fellbaum, 1998), a network of meaningfully related words, provides the best online open-source database for studying such lexical relations. Our study shows that a basic level noun can be identified by the ratio at which it forms compounds (e.g. chair→armchair) and by the length difference relative to its hyponyms. We compared graded readings for American children with high school English readings for Taiwanese students, using several readability formulas and basic level noun ratios (i.e. the number of basic level noun types divided by the number of noun types in a text). The results suggest that basic level noun ratios provide a robust and meaningful index of lexical complexity, which is directly associated with text readability.
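A rough sketch of the compound-formation heuristic and the basic level noun ratio described above, using NLTK's WordNet interface (requires the nltk package with WordNet data installed). The 0.2 threshold in the comment is an illustrative assumption, not the paper's value.

```python
from nltk.corpus import wordnet as wn  # pip install nltk; download 'wordnet'

def compound_ratio(noun):
    """Share of a noun's hyponym lemmas that are compounds built on the
    noun itself (chair -> armchair, folding_chair), aggregated over all
    noun senses for simplicity."""
    hypo_lemmas = set()
    for synset in wn.synsets(noun, pos=wn.NOUN):
        for hypo in synset.hyponyms():
            hypo_lemmas.update(lem.name().lower() for lem in hypo.lemmas())
    if not hypo_lemmas:
        return 0.0
    compounds = {lem for lem in hypo_lemmas
                 if lem != noun and lem.endswith(noun)}
    return len(compounds) / len(hypo_lemmas)

def basic_level_noun_ratio(noun_types, is_basic):
    """Basic level noun types divided by all noun types in a text.
    is_basic: predicate, e.g. lambda n: compound_ratio(n) > 0.2
    (the threshold is an assumption for illustration)."""
    basics = [n for n in noun_types if is_basic(n)]
    return len(basics) / len(noun_types) if noun_types else 0.0
```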
Citations: 13
Corpus Cleanup of Mistaken Agreement Using Word Sense Disambiguation
Pub Date : 2008-12-01 DOI: 10.30019/IJCLCLP.200812.0002
Liang-Chih Yu, Chung-Hsien Wu, Jui-Feng Yeh, E. Hovy
Word sense annotated corpora are useful resources for many text mining applications. Such corpora are only useful if their annotations are consistent. Most large-scale annotation efforts take special measures to reconcile inter-annotator disagreement. To date, however, nobody has investigated how to automatically determine exemplars in which the annotators agree but are wrong. In this paper, we use OntoNotes, a large-scale corpus of semantic annotations, including word senses, predicate-argument structure, ontology linking, and coreference. To determine the mistaken agreements in word sense annotation, we employ word sense disambiguation (WSD) to select a set of suspicious candidates for human evaluation. Experiments examine the performance of WSD from three aspects: precision, cost-effectiveness ratio, and entropy. The experimental results show that WSD is most effective in identifying erroneous annotations for highly-ambiguous words, while a baseline is better for other cases. The two methods can be combined to improve the cleanup process. This procedure allows us to find approximately 2% of the remaining erroneous agreements in the OntoNotes corpus. A similar procedure can easily be defined to check other annotated corpora.
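A minimal sketch of the candidate-selection step described above, assuming the WSD classifier returns a probability per sense; the confidence and entropy thresholds are illustrative assumptions.

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a sense-probability distribution."""
    return -sum(p * math.log(p, 2) for p in probs.values() if p > 0)

def suspicious_candidates(instances, min_conf=0.8, max_entropy=1.0):
    """Flag annotations where WSD confidently disagrees with the
    human-agreed sense.

    instances: iterable of (instance_id, gold_sense, {sense: prob}).
    Returns the flagged instances, most confident disagreements first,
    for human evaluation."""
    flagged = []
    for inst_id, gold, probs in instances:
        pred = max(probs, key=probs.get)
        if (pred != gold and probs[pred] >= min_conf
                and entropy(probs) <= max_entropy):
            flagged.append((inst_id, gold, pred, probs[pred]))
    return sorted(flagged, key=lambda x: -x[3])
```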
Citations: 0
Hierarchical Taxonomy Integration Using Semantic Feature Expansion on Category-Specific Terms
Pub Date : 2008-12-01 DOI: 10.30019/IJCLCLP.200812.0003
Cheng-Zen Yang, Ing-Xiang Chen, Cheng-Tse Hung, Ping-Jung Wu
In recent years, the hierarchical taxonomy integration problem has received considerable attention in many research studies. Many types of implicit information embedded in the source taxonomy have been explored to improve integration performance. The semantic information embedded in the source taxonomy, however, has not been discussed in previous research. In this paper, an enhanced integration approach called SFE (Semantic Feature Expansion) is proposed to exploit the semantic information of category-specific terms. Our experiments on two hierarchical Web taxonomies show that integration performance can be further improved with the SFE scheme.
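A minimal sketch of what a semantic feature expansion step could look like, assuming a precomputed lookup from category-specific terms to semantically related terms; the paper's actual expansion resource and weighting may differ.

```python
def expand_document_features(doc_terms, category_terms, related_terms):
    """When a document contains a category-specific term, add that
    term's semantically related terms to the feature set before the
    integration classifier is trained or applied.

    related_terms: dict mapping a term to a set of related terms
    (the lookup source is an assumption for illustration)."""
    expanded = set(doc_terms)
    for term in doc_terms:
        if term in category_terms:
            expanded |= related_terms.get(term, set())
    return expanded

# e.g. expand_document_features({"guitar", "price"},
#                               category_terms={"guitar"},
#                               related_terms={"guitar": {"instrument"}})
```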
Citations: 1
Automatic Wikibook Prototyping via Mining Wikipedia
Pub Date : 2008-12-01 DOI: 10.30019/IJCLCLP.200812.0004
Jen-Liang Chou, Shih-Hung Wu
Wikipedia is the world's largest collaboratively edited source of encyclopedic knowledge. Wikibook is a sub-project of Wikipedia intended to create books that can be edited by various contributors, similar to how Wikipedia is composed and edited. Editing a book, however, requires more effort than editing separate articles. Therefore, how to quickly prototype a book is a new research issue. In this paper, we investigate how to automatically extract content from Wikipedia and generate a prototype of a Wikibook as a starting point for further editing. Applying search technology, our system retrieves relevant articles from Wikipedia and builds a table of contents automatically using a two-stage search method. Our experiments show that, given a keyword as the title of a book, our system can generate a table of contents that can be treated as a prototype of a Wikibook. Such a system can aid the editing of free textbooks. We propose an evaluation method that compares system results with a traditional textbook, and we report the coverage of our system.
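A minimal sketch of the two-stage search idea described above, assuming a generic search(query, k) interface over Wikipedia articles; the interface name and parameters are assumptions, not the paper's implementation.

```python
def build_wikibook_toc(title_keyword, search, n_chapters=10, n_sections=5):
    """Two-stage table-of-contents prototyping.

    Stage 1: retrieve articles relevant to the book title; these serve
    as chapter candidates.
    Stage 2: search again with each chapter title appended to the book
    keyword to fill in section candidates.

    search: assumed callable, search(query, k) -> k ranked article titles.
    """
    chapters = search(title_keyword, n_chapters)             # stage 1
    return {
        chapter: search(f"{title_keyword} {chapter}", n_sections)  # stage 2
        for chapter in chapters
    }

# e.g. toc = build_wikibook_toc("Machine Learning", my_search_fn)
```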
Citations: 0
Feature Weighting Random Forest for Detection of Hidden Web Search Interfaces
Pub Date : 2008-12-01 DOI: 10.30019/IJCLCLP.200812.0001
Yunming Ye, Hongbo Li, Xiaobai Deng, J. Huang
Search interface detection is an essential task for extracting information from the hidden Web. The challenge is that search interface data is represented by high-dimensional, sparse features with many missing values. This paper presents a new multi-classifier ensemble approach to this problem. In this approach, we extend the random forest algorithm with a weighted feature selection method to build the individual classifiers. With this improved random forest algorithm (IRFA), each classifier is learned from a weighted subset of the feature space, so the ensemble of decision trees can fully exploit the useful features of search interface patterns. We compare our ensemble approach with other well-known classification algorithms, such as SVM, C4.5, Naive Bayes, and the original random forest algorithm (RFA). The experimental results show that our method is more effective in detecting search interfaces of the hidden Web.
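A minimal sketch of the weighted feature selection at the heart of such an approach: features enter a tree's subspace with probability proportional to an importance weight rather than uniformly. The weighting source (e.g. chi-square scores) and parameters are assumptions for illustration.

```python
import random

def weighted_feature_subspace(feature_weights, m, rng=random):
    """Sample m distinct features with probability proportional to
    their precomputed importance weight; each tree in the forest is
    then grown on its own weighted subspace instead of a uniformly
    random one, as in the standard random forest."""
    features = list(feature_weights)
    weights = [feature_weights[f] for f in features]
    chosen = set()
    while len(chosen) < min(m, len(features)):
        chosen.add(rng.choices(features, weights=weights, k=1)[0])
    return sorted(chosen)

# e.g. weighted_feature_subspace({"form_tag": 5.1, "btn_text": 3.2,
#                                 "bg_color": 0.2}, m=2)
```

Sampling with replacement and deduplicating keeps the sketch short; a production version would use weighted sampling without replacement.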
Citations: 19
Knowledge Representation and Sense Disambiguation for Interrogatives in E-HowNet
Pub Date : 2008-09-01 DOI: 10.30019/IJCLCLP.200809.0001
Shu-Ling Huang, Keh-Jiann Chen
In order to train machines to 'understand' natural language, we propose a meaning representation mechanism called E-HowNet to encode lexical senses. In this paper, we take interrogatives as examples to demonstrate the mechanisms of semantic representation and composition of interrogative constructions under the framework of E-HowNet. We classify the interrogative words into five classes according to their query types, and represent each type of interrogative with fine-grained features and operators. The process of semantic composition and the difficulties of representation, such as word sense disambiguation, are addressed. Finally, machine understanding is tested by showing how machines derive the same deep semantic structure for synonymous sentences with different surface structures.
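Illustrative only: a toy encoding of interrogatives as typed feature structures grouped by query type, loosely in the spirit of the fine-grained features and operators described above. The five class names and features below are assumptions, not the paper's actual E-HowNet definitions.

```python
# Toy feature structures keyed by interrogative word; the query classes
# and restriction features are invented for illustration.
INTERROGATIVES = {
    "誰":   {"query": "participant", "restriction": "human"},  # who
    "哪裡": {"query": "location"},                             # where
    "何時": {"query": "time"},                                 # when
    "為何": {"query": "reason"},                               # why
    "如何": {"query": "manner"},                               # how
}

def query_type(word):
    """Look up the query class that would guide sense disambiguation
    and semantic composition for an interrogative word."""
    return INTERROGATIVES.get(word, {}).get("query", "unknown")

# e.g. query_type("哪裡") -> "location"
```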
Citations: 1
Improved Minimum Phone Error based Discriminative Training of Acoustic Models for Mandarin Large Vocabulary Continuous Speech Recognition
Pub Date : 2008-09-01 DOI: 10.30019/IJCLCLP.200809.0005
Shih-Hung Liu, Fang-Hui Chu, Yueng-Tien Lo, Berlin Chen
This paper considers minimum phone error (MPE) based discriminative training of acoustic models for Mandarin broadcast news recognition. We present a new phone accuracy function based on the frame-level accuracy of hypothesized phone arcs instead of the raw phone accuracy function of MPE training. Moreover, we explore a novel data selection approach based on the frame-level normalized entropy of Gaussian posterior probabilities obtained from the word lattice of each training utterance. It has the merit of focusing the training algorithm on the statistics of frame samples that lie near the decision boundary, for better discrimination. The underlying characteristics of the presented approaches are extensively investigated, and their performance is verified by comparison with the standard MPE training approach as well as other related work. Experiments conducted on broadcast news collected in Taiwan demonstrate that integrating the frame-level phone accuracy calculation and data selection yields slight but consistent improvements over the baseline system.
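A minimal sketch of the entropy-based data selection described above: frames whose normalized posterior entropy is high lie near the decision boundary, where discriminative statistics are most informative. The threshold value is an illustrative assumption.

```python
import math

def normalized_entropy(posteriors):
    """Entropy of the per-frame posteriors of competing arcs,
    normalized to [0, 1] by the maximum possible entropy log(K)."""
    h = -sum(p * math.log(p) for p in posteriors if p > 0)
    return h / math.log(len(posteriors)) if len(posteriors) > 1 else 0.0

def select_training_frames(frames, threshold=0.5):
    """Keep the indices of frames whose normalized posterior entropy
    exceeds a threshold, i.e. frames near the decision boundary.

    frames: list of per-frame posterior lists taken from the word
    lattice of a training utterance (threshold is an assumption)."""
    return [i for i, post in enumerate(frames)
            if normalized_entropy(post) > threshold]

# e.g. select_training_frames([[0.98, 0.02], [0.5, 0.3, 0.2]]) -> [1]
```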
Citations: 2