
Latest publications: Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering (NLPKE-2010)

Information retrieval by text summarization for an Indian regional language
Jagadish S. Kallimani, K. Srinivasa, B. E. Reddy
Information Extraction is a method for filtering information from large volumes of text. It is a more limited task than full text understanding: in full text understanding, we aim to represent all the information in a text explicitly, whereas in Information Extraction we delimit in advance, as part of the task specification, the semantic range of the output. In this paper, a model for summarization of large documents using a novel approach is proposed. The work is extended to an Indian regional language (Kannada), and various analyses of the results are discussed.
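As an illustration of the extractive style of summarization the abstract describes (the paper's actual model and any Kannada-specific processing are not reproduced here), a minimal frequency-based sentence scorer might look like this:

```python
import re
from collections import Counter

def summarize(text, n=1):
    # Naive extractive summarizer: rank sentences by the summed corpus
    # frequency of their words. A real Kannada system would need
    # language-specific sentence and word segmentation.
    sentences = [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence):
        return sum(freq[t] for t in re.findall(r"\w+", sentence.lower()))

    return sorted(sentences, key=score, reverse=True)[:n]
```

Summed (rather than averaged) frequencies favor longer, information-dense sentences, which is often the desired bias in single-document extraction.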
{"title":"Information retrieval by text summarization for an Indian regional language","authors":"Jagadish S. Kallimani, K. Srinivasa, B. E. Reddy","doi":"10.1109/NLPKE.2010.5587764","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587764","url":null,"abstract":"The Information Extraction is a method for filtering information from large volumes of text. Information Extraction is a limited task than full text understanding. In full text understanding, we aspire to represent in an explicit fashion about all the information in a text. In contrast, in Information Extraction, we delimit in advance, as part of the specification of the task and the semantic range of the output. In this paper, a model for summarization from large documents using a novel approach has been proposed. Extending the work for an Indian regional language (Kannada) and various analyses of results were discussed.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129165516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 23
Improving emotion recognition from text with fractionation training
Ye Wu, F. Ren
Previous approaches to emotion recognition from text were mostly implemented in keyword-based or learning-based frameworks. However, keyword-based systems cannot recognize emotion in text that contains no emotional keywords, and constructing an emotion lexicon is difficult because of the ambiguity in defining all emotional keywords. Supervised machine learning methods that use no prior knowledge also do not perform as well on emotion recognition as they do on some traditional tasks. In this paper, a fractionation training approach is proposed that uses an emotion lexicon extracted from an annotated blog emotion corpus to train SVM classifiers. Experimental results show the effectiveness of the proposed approach, and additional aspects of the experimental design further improve classification accuracy.
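The role the extracted lexicon plays can be sketched as a feature extractor over text; the toy lexicon and bare-counts fallback below are illustrative stand-ins for the paper's corpus-derived lexicon and SVM classifiers:

```python
# Hypothetical mini lexicon; the paper extracts its lexicon from an
# annotated blog emotion corpus.
LEXICON = {
    "joy": {"happy", "delighted", "love"},
    "anger": {"furious", "hate", "annoyed"},
}

def lexicon_features(text):
    # One count feature per emotion class, usable as classifier input.
    tokens = text.lower().split()
    return {emo: sum(t in words for t in tokens) for emo, words in LEXICON.items()}

def classify(text):
    feats = lexicon_features(text)
    best = max(feats, key=feats.get)
    # Texts with no emotional keyword fall through to "none" -- exactly
    # the gap that motivates training classifiers beyond keyword matching.
    return best if feats[best] > 0 else "none"
```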
{"title":"Improving emotion recognition from text with fractionation training","authors":"Ye Wu, F. Ren","doi":"10.1109/NLPKE.2010.5587800","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587800","url":null,"abstract":"Previous approaches of emotion recognition from text were mostly implemented under keyword-based or learning-based frameworks. However, keyword-based systems are unable to recognize emotion from text with no emotional keywords, and constructing an emotion lexicon is a tough work because of ambiguity in defining all emotional keywords. Completely prior-knowledge-free supervised machine learning methods for emotion recognition also do not perform as well as on some traditional tasks. In this paper, a fractionation training approach is proposed, utilizing the emotion lexicon extracted from an annotated blog emotion corpus to train SVM classifiers. Experimental results show the effectiveness of the proposed approach, and the use of some other experimental design also improves the classification accuracy.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128574637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
MT on and for the Web
C. Boitet, H. Blanchon, Mark Seligman, Valérie Bellynck
A Systran MT server became available on the minitel network in 1984, and on the Internet in 1994. Since then we have come to a better understanding of the nature of MT systems by separately analyzing their linguistic, computational, and operational architectures. Also, thanks to the CxAxQ metatheorem, the systems' inherent limits have been clarified, and design choices can now be made in an informed manner according to the translation situation. MT evaluation has also matured: tools based on reference translations are useful for measuring progress; those based on subjective judgments, for estimating future usage quality; and task-related objective measures (such as post-editing distances), for measuring operational quality. Moreover, the same technological advances that led to “Web 2.0” have brought several futuristic predictions to fruition. Free Web MT services have democratized assimilation MT beyond belief. Speech translation research has given rise to usable systems for restricted tasks running on PDAs or on mobile phones connected to servers. New man-machine interface techniques have made interactive disambiguation usable in large-coverage multimodal MT. Increases in computing power have made statistical methods workable, and have led to the possibility of building low-linguistic-quality but still useful MT systems by machine learning from aligned bilingual corpora (SMT, EBMT). In parallel, progress has been made in developing interlingua-based MT systems using hybrid methods. Unfortunately, many misconceptions about MT have spread among the public, and even among MT researchers, because of ignorance of the past and present of MT R&D. A compensating factor is the willingness of end users to contribute freely to building essential parts of the linguistic knowledge needed to construct MT systems, whether corpus-related or lexical.
Finally, some developments we anticipated fifteen years ago have not yet materialized, such as online writing tools equipped with interactive disambiguation, and as a corollary the possibility of transforming source documents into self-explaining documents (SEDs) and of producing corresponding SEDs fully automatically in several target languages. These visions should now be realized, thanks to the evolution of Web programming and multilingual NLP techniques, leading towards a true Semantic Web, “Web 3.0”, which will support ubilingual (ubiquitous multilingual) computing.
{"title":"MT on and for the Web","authors":"C. Boitet, H. Blanchon, Mark Seligman, Valérie Bellynck","doi":"10.1109/NLPKE.2010.5587865","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587865","url":null,"abstract":"A Systran MT server became available on the minitel network in 1984, and on Internet in 1994. Since then we have come to a better understanding of the nature of MT systems by separately analyzing their linguistic, computational, and operational architectures. Also, thanks to the CxAxQ metatheorem, the systems' inherent limits have been clarified, and design choices can now be made in an informed manner according to the translation situations. MT evaluation has also matured: tools based on reference translations are useful for measuring progress; those based on subjective judgments for estimating future usage quality; and task-related objective measures (such as post-editing distances) for measuring operational quality. Moreover, the same technological advances that have led to “Web 2.0” have brought several futuristic predictions to fruition. Free Web MT services have democratized assimilation MT beyond belief. Speech translation research has given rise to usable systems for restricted tasks running on PDAs or on mobile phones connected to servers. New man-machine interface techniques have made interactive disambiguation usable in large-coverage multimodal MT. Increases in computing power have made statistical methods workable, and have led to the possibility of building low-linguistic-quality but still useful MT systems by machine learning from aligned bilingual corpora (SMT, EBMT). In parallel, progress has been made in developing interlingua-based MT systems, using hybrid methods. Unfortunately, many misconceptions about MT have spread among the public, and even among MT researchers, because of ignorance of the past and present of MT R&D. 
A compensating factor is the willingness of end users to freely contribute to building essential parts of the linguistic knowledge needed to construct MT systems, whether corpus-related or lexical. Finally, some developments we anticipated fifteen years ago have not yet materialized, such as online writing tools equipped with interactive disambiguation, and as a corollary the possibility of transforming source documents into self-explaining documents (SEDs) and of producing corresponding SEDs fully automatically in several target languages. These visions should now be realized, thanks to the evolution of Web programming and multilingual NLP techniques, leading towards a true Semantic Web, “Web 3.0”, which will support ubilingual (ubiquitous multilingual) computing.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120994756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7
Marine literature categorization based on minimizing the labelled data
Wei Zhang, Qiuhong Wang, Yeheng Deng, R. Du
In marine literature categorization, supervised machine learning methods require a great deal of time for labelling samples by hand. We therefore use a Co-training method to reduce the number of labelled samples needed to train the classifier. In this paper, we select features only from the text details and add attribute labels to them, which greatly improves the efficiency of text processing. To build two views, we split the features into two parts, each of which forms an independent view: one view consists of the feature set of the abstract, and the other of the feature sets of title, keywords, creator and department. In experiments, the F1 value and error rate of the categorization system reached about 0.863 and 14.26%. These are close to the performance of a supervised classifier (0.902 and 9.13%) trained on more than 1,500 labelled samples, whereas the Co-training method trains the original classifier from only one positive and one negative labelled sample. In addition, we consider incorporating the idea of active learning into the Co-training method.
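The two-view bootstrapping loop can be sketched as follows; the word-count "classifiers" are toy stand-ins for the real ones, while the seed of one positive and one negative example mirrors the setting described above:

```python
from collections import Counter

def train(docs, labels):
    # Toy per-class word-frequency model standing in for a real classifier.
    model = {}
    for doc, lab in zip(docs, labels):
        model.setdefault(lab, Counter()).update(doc.split())
    return model

def predict(model, doc):
    scores = {lab: sum(cnt[w] for w in doc.split()) for lab, cnt in model.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

def co_train(view1, view2, seeds, unlabeled, rounds=2):
    # seeds: {index: label}; view1/view2: index -> text for each view
    # (e.g. abstract features vs. title/keywords/creator/department features).
    labeled = dict(seeds)
    for _ in range(rounds):
        for views in (view1, view2):
            model = train([views[i] for i in labeled], [labeled[i] for i in labeled])
            pool = [i for i in unlabeled if i not in labeled]
            if not pool:
                return labeled
            # Each view labels its most confident unlabeled example, which
            # then also becomes training data for the other view.
            best = max(pool, key=lambda i: predict(model, views[i])[1])
            labeled[best] = predict(model, views[best])[0]
    return labeled
```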
{"title":"Marine literature categorization based on minimizing the labelled data","authors":"Wei Zhang, Qiuhong Wang, Yeheng Deng, R. Du","doi":"10.1109/NLPKE.2010.5587847","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587847","url":null,"abstract":"In marine literature categorization, supervised machine learning method will take a lot of time for labelling the samples by hand. So we utilize Co-training method to decrease the quantities of labelled samples needed for training the classifier. In this paper, we only select features from the text details and add attribute labels to them. It can greatly boost the efficiency of text processing. For building up two views, we split features into two parts, each of which can form an independent view. One view is made up of the feature set of abstract, and the other is made up of the feature sets of title, keywords, creator and department. In experiments, the F1 value and error rate of the categorization system could reach about 0.863 and 14.26%.They are close to the performance of supervised classifier (0.902 and 9.13%), which is trained by more than 1500 labelled samples, however, the labelled samples used by Co-training categorization method to train the original classifier are only one positive sample and one negative sample. 
In addition we consider joining the idea of the active-learning in Co-training method.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115182144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Web-based technical term translation pairs mining for patent document translation
Feiliang Ren, Jingbo Zhu, Huizhen Wang
This paper proposes a simple but powerful approach for automatically obtaining technical term translation pairs in the patent domain from the Web. First, several technical terms are used as seed queries and submitted to a search engine. Second, an extraction algorithm extracts candidate key word translation pairs from the returned web pages. Finally, a multi-feature evaluation method selects those pairs that are true technical term translation pairs in the patent domain. With this method, we obtain about 8,890,000 key word translation pairs that can be used to translate the technical terms in patent documents. Experimental results show that the precision of these translation pairs is more than 99%, and their coverage of the technical terms in patent documents is more than 84%.
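One common surface cue that such web mining exploits is a term immediately followed by its translation in parentheses; a minimal extractor for that single pattern (far simpler than the paper's extraction-plus-multi-feature pipeline) could be:

```python
import re

# Matches a Chinese term directly followed by an English gloss in
# half- or full-width parentheses, e.g. "语音识别(speech recognition)".
# Illustrative only: the paper combines an extraction algorithm with
# multi-feature scoring of candidate pairs.
PAIR_RE = re.compile(r"([\u4e00-\u9fff]+)[((]([A-Za-z][A-Za-z \-]*)[))]")

def extract_pairs(page_text):
    return PAIR_RE.findall(page_text)
```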
{"title":"Web-based technical term translation pairs mining for patent document translation","authors":"Feiliang Ren, Jingbo Zhu, Huizhen Wang","doi":"10.1109/NLPKE.2010.5587775","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587775","url":null,"abstract":"This paper proposes a simple but powerful approach for obtaining technical term translation pairs in patent domain from Web automatically. First, several technical terms are used as seed queries and submitted to search engineering. Secondly, an extraction algorithm is proposed to extract some key word translation pairs from the returned web pages. Finally, a multi-feature based evaluation method is proposed to pick up those translation pairs that are true technical term translation pairs in patent domain. With this method, we obtain about 8,890,000 key word translation pairs which can be used to translate the technical terms in patent documents. And experimental results show that the precision of these translation pairs are more than 99%, and the coverage of these translation pairs for the technical terms in patent documents are more than 84%.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121554019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 5
Recognition of abnormal vibrational responses of signposts using the Two-dimensional Geometric Distance and Wilcoxon test
M. Jinnai, Y. Akashi, S. Nakaya, F. Ren, M. Fukumi
In expressway companies, workers strike signposts with wooden hammers and estimate the degree of corrosion by listening to the sound. To automate this, we have been developing software that recognizes an abnormal impact vibrational response caused by corrosion. The software extracts sonograms from impact vibrational waves using LPC spectrum analysis, matches the sonogram images of a standard and an input impact vibration using the Two-dimensional Geometric Distance, and then distinguishes abnormality in the input impact vibration using the Wilcoxon rank-sum test. We measured the impact vibrations of five normal and five abnormal signposts and carried out automatic recognition experiments. The software recognized all cases correctly, verifying the effectiveness of the proposed method.
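The final abnormality decision rests on the Wilcoxon rank-sum test; under the usual normal approximation (and assuming no tied values, for brevity) it can be computed without external libraries:

```python
import math

def rank_sum_pvalue(x, y):
    # Wilcoxon rank-sum test, two-sided, via the normal approximation.
    # Assumes no tied values (ties would need average ranks).
    pooled = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    w = sum(rank for rank, (_, grp) in enumerate(pooled, start=1) if grp == 0)
    n, m = len(x), len(y)
    mu = n * (n + m + 1) / 2.0
    sigma = math.sqrt(n * m * (n + m + 1) / 12.0)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

def is_abnormal(reference_features, input_features, alpha=0.05):
    # Flag the input vibration as abnormal when its feature distribution
    # differs significantly from the standard (normal-signpost) one.
    return rank_sum_pvalue(reference_features, input_features) < alpha
```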
{"title":"Recognition of abnormal vibrational responses of signposts using the Two-dimensional Geometric Distance and Wilcoxon test","authors":"M. Jinnai, Y. Akashi, S. Nakaya, F. Ren, M. Fukumi","doi":"10.1109/NLPKE.2010.5587837","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587837","url":null,"abstract":"In expressway companies, workers have been impacting signposts using wooden hammers and estimating the degree of the corrosion by listening to the sound. In order to automate this, we have been developing software that recognizes an abnormal impact vibrational response due to corrosion. This software extracts sonograms from impact vibrational waves using the LPC spectrum analysis, and matches images of the sonogram between a standard and an input impact vibrations using the Two-dimensional Geometric Distance. Then, the software distinguishes the abnormality of the input impact vibration using Wilcoxon rank-sum test. We have measured the impact vibrations of five normal signposts and five abnormal signposts, and carried out the automatic recognition experiments. As a result, the software has recognized correctly in all cases. We have verified the effectiveness of the proposed method.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"191 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116781424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 3
Using cognitive model to automatically analyze Chinese predicate 基于认知模型的汉语谓词自动分析
Shiqi Li, T. Zhao, Hanjing Li, Shui Liu, Pengyuan Liu
This paper presents a cognitive approach to semantic role labeling in Chinese based on an extension of the Construction-Integration (CI) model. In contrast with machine learning methods, this method can implicitly integrate more contextual and general knowledge into the calculation. First, we define a proposition representation as the basic unit for semantic role labeling with the CI model. Then, by simulating the spreading activation of the human mind, contextually appropriate propositions are strengthened and inappropriate ones are inhibited. Finally, experimental results show encouraging performance on the Chinese PropBank (CPB) and two other datasets.
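The strengthen/inhibit dynamics can be sketched as a generic CI-style settling loop over a proposition network; the weights and update rule here are illustrative, not the paper's:

```python
def spread_activation(init, links, steps=5, rate=0.5):
    # init: proposition -> initial activation; links: proposition ->
    # [(neighbour, weight)], with positive weights for contextual support
    # and negative ones for inhibition. Each step blends a node's current
    # activation with the weighted sum of its neighbours' activations.
    act = dict(init)
    for _ in range(steps):
        act = {
            node: (1 - rate) * a + rate * sum(w * act[n] for n, w in links.get(node, []))
            for node, a in act.items()
        }
    return act
```

Mutually supporting propositions hold their activation, while isolated (contextually unsupported) ones decay toward zero.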
{"title":"Using cognitive model to automatically analyze Chinese predicate","authors":"Shiqi Li, T. Zhao, Hanjing Li, Shui Liu, Pengyuan Liu","doi":"10.1109/NLPKE.2010.5587843","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587843","url":null,"abstract":"This paper presents an cognitive approach to semantic role labeling in Chinese based on an extension of Construction-Integration (CI) model. The method can implicitly integrate more contextual and general knowledge into the calculating process in contrast with the machine learning methods. First, we define a proposition representation as the basic unit for semantic role labeling using CI model. Then the contextually appropriate propositions will be strengthened and inappropriate ones will be inhibited by simulating the spreading activation of human mind. Finally, experimental results show an encouraging performance on Chinese PropBank (CPB) and other two datasets.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117129179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Computerized electronic nursing staffs' daily records system in the “A” psychiatric hospital: Present situation and future prospects
T. Tanioka, A. Kawamura, Mai Date, K. Osaka, Yuko Yasuhara, M. Kataoka, Yukie Iwasa, Toshihiro Sugiyama, Kazuyuki Matsumoto, Tomoko Kawata, Misako Satou, K. Mifune
At the “A” psychiatric hospital, nurses previously used paper-based daily records. Aiming at higher-quality nursing management, we introduced an “electronic management system for nursing staffs' daily records (ENSDR)”, interlocked with “Psychoms®”, into this hospital. Introducing the system produced some good effects, but some problems remain. The purpose of this study is to evaluate the current situation and the challenges brought out by using ENSDR, and to indicate the future direction of development.
{"title":"Computerized electronic nursing staffs' daily records system in the “A” psychiatric hospital: Present situation and future prospects","authors":"T. Tanioka, A. Kawamura, Mai Date, K. Osaka, Yuko Yasuhara, M. Kataoka, Yukie Iwasa, Toshihiro Sugiyama, Kazuyuki Matsumoto, Tomoko Kawata, Misako Satou, K. Mifune","doi":"10.1109/NLPKE.2010.5587814","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587814","url":null,"abstract":"At the “A” psychiatric hospital, previously nurses used paper-based nursing staffs' daily records. We aimed to manage the higher quality nursing and introduced “electronic management system for nursing staffs' daily records system (ENSDR)” interlocked with “Psychoms ®” into this hospital. Some good effects were achieved by introducing this system. However, some problems have been left in this system. The purpose of this study is to evaluate the current situation and challenges which brought out by using ENSDR, and to indicate the future direction of the development.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127284984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 4
Data selection for statistical machine translation 统计机器翻译的数据选择
Peng Liu, Yu Zhou, Chengqing Zong
The bilingual corpus has a great effect on the performance of a statistical machine translation system: more data generally leads to better performance, but also increases the computational load. In this paper, we propose methods to estimate sentence weights and, based on them, to select the more informative sentences from the training and development corpora. The translation system is built and tuned on the resulting compact corpus. Experimental results show that we obtain competitive performance with much less data.
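One generic way to realize such sentence weighting (the abstract does not specify the paper's exact weight, so the heuristic below is an assumption) is greedy selection by unseen n-gram coverage:

```python
def select_informative(sentences, budget):
    # Greedy selection: repeatedly pick the sentence that adds the most
    # previously unseen bigrams. A generic informativeness heuristic,
    # not the paper's specific sentence weight.
    def bigrams(s):
        toks = s.split()
        return {(a, b) for a, b in zip(toks, toks[1:])}

    seen, chosen = set(), []
    remaining = list(sentences)
    for _ in range(min(budget, len(remaining))):
        best = max(remaining, key=lambda s: len(bigrams(s) - seen))
        chosen.append(best)
        seen |= bigrams(best)
        remaining.remove(best)
    return chosen
```

Duplicate or near-duplicate sentences contribute no new n-grams, so they rank last, which is how the corpus shrinks without losing coverage.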
{"title":"Data selection for statistical machine translation","authors":"Peng Liu, Yu Zhou, Chengqing Zong","doi":"10.1109/NLPKE.2010.5587827","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587827","url":null,"abstract":"The bilingual language corpus has a great effect on the performance of a statistical machine translation system. More data will lead to better performance. However, more data also increase the computational load. In this paper, we propose methods to estimate the sentence weight and select more informative sentences from the training corpus and the development corpus based on the sentence weight. The translation system is built and tuned on the compact corpus. The experimental results show that we can obtain a competitive performance with much less data.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"392 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115992325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 10
Detection of users suspected of using multiple user accounts and manipulating evaluations in a community site 检测涉嫌使用多个用户帐户和操纵社区网站评估的用户
Naoki Ishikawa, Kenji Umemoto, Yasuhiko Watanabe, Yoshihiro Okada, Ryo Nishimura, M. Murata
Some users of community sites abuse anonymity and attempt to manipulate communications. These users and their submissions discourage other users, keep them from retrieving good communication records, and decrease the credibility of the site. To address this problem, we conducted an experimental study to detect users suspected of using multiple user accounts and manipulating evaluations in a community site. In this study, we used messages from the Yahoo! chiebukuro data for training and evaluation.
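One simple signal for linking accounts (an illustrative heuristic; the paper trains on annotated Yahoo! chiebukuro data rather than using this rule) is lexical similarity between the accounts' message histories:

```python
import math
from collections import Counter

def profile(messages):
    # Bag-of-words profile of an account's posting history.
    return Counter(w for m in messages for w in m.lower().split())

def cosine(a, b):
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def suspected_same_user(msgs_a, msgs_b, threshold=0.8):
    # Flag two accounts whose vocabularies are suspiciously similar;
    # the threshold is an illustrative choice.
    return cosine(profile(msgs_a), profile(msgs_b)) >= threshold
```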
{"title":"Detection of users suspected of using multiple user accounts and manipulating evaluations in a community site","authors":"Naoki Ishikawa, Kenji Umemoto, Yasuhiko Watanabe, Yoshihiro Okada, Ryo Nishimura, M. Murata","doi":"10.1109/NLPKE.2010.5587765","DOIUrl":"https://doi.org/10.1109/NLPKE.2010.5587765","url":null,"abstract":"Some users in a community site abuse the anonymity and attempt to manipulate communications in a community site. These users and their submissions discourage other users, keep them from retrieving good communication records, and decrease the credibility of the communication site. To solve this problem, we conducted an experimental study to detect users suspected of using multiple user accounts and manipulating evaluations in a community site. In this study, we used messages in the data of Yahoo! chiebukuro for data training and examination.","PeriodicalId":259975,"journal":{"name":"Proceedings of the 6th International Conference on Natural Language Processing and Knowledge Engineering(NLPKE-2010)","volume":"243 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129773764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 7