Correction of Spaces in Persian Sentences for Tokenization

Mahnaz Panahandeh, Shirin Ghanbari
{"title":"波斯语句子中空格的标记化校正","authors":"Mahnaz Panahandeh, Shirin Ghanbari","doi":"10.1109/KBEI.2019.8734954","DOIUrl":null,"url":null,"abstract":"The exponential growth of the Internet and its users and the emergence of Web 2.0 have caused a large volume of textual data to be created. Automatic analysis of such data can be used in making decisions. As online text is created by different producers with different styles of writing, pre-processing is a necessity prior to any processes related to natural language tasks. An essential part of textual preprocessing prior to the recognition of the word vocabulary is normalization, which includes the correction of spaces that particularly in the Persian language this includes both full-spaces between words and half-spaces. Through the review of user comments within social media services, it can be seen that in many cases users do not adhere to grammatical rules of inserting both forms of spaces, which increases the complexity of the identification of words and henceforth, reducing the accuracy of further processing on the text. In this study, current issues in the normalization and tokenization of preprocessing tools within the Persian language and essentially identifying and correcting the separation of words are and the correction of spaces are proposed. The results obtained and compared to leading preprocessing tools highlight the significance of the proposed methodology.","PeriodicalId":339990,"journal":{"name":"2019 5th Conference on Knowledge Based Engineering and Innovation (KBEI)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Correction of Spaces in Persian Sentences for Tokenization\",\"authors\":\"Mahnaz Panahandeh, Shirin Ghanbari\",\"doi\":\"10.1109/KBEI.2019.8734954\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The exponential growth of the Internet and its users and the emergence of Web 2.0 have caused a large volume of textual data to be created. Automatic analysis of such data can be used in making decisions. As online text is created by different producers with different styles of writing, pre-processing is a necessity prior to any processes related to natural language tasks. An essential part of textual preprocessing prior to the recognition of the word vocabulary is normalization, which includes the correction of spaces that particularly in the Persian language this includes both full-spaces between words and half-spaces. Through the review of user comments within social media services, it can be seen that in many cases users do not adhere to grammatical rules of inserting both forms of spaces, which increases the complexity of the identification of words and henceforth, reducing the accuracy of further processing on the text. In this study, current issues in the normalization and tokenization of preprocessing tools within the Persian language and essentially identifying and correcting the separation of words are and the correction of spaces are proposed. 
The results obtained and compared to leading preprocessing tools highlight the significance of the proposed methodology.\",\"PeriodicalId\":339990,\"journal\":{\"name\":\"2019 5th Conference on Knowledge Based Engineering and Innovation (KBEI)\",\"volume\":\"88 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 5th Conference on Knowledge Based Engineering and Innovation (KBEI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/KBEI.2019.8734954\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 5th Conference on Knowledge Based Engineering and Innovation (KBEI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/KBEI.2019.8734954","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6

Abstract

The exponential growth of the Internet and its users and the emergence of Web 2.0 have led to the creation of large volumes of textual data. Automatic analysis of such data can support decision making. Because online text is produced by different authors with different writing styles, pre-processing is a necessity prior to any natural language processing task. An essential part of textual preprocessing, prior to recognition of the word vocabulary, is normalization, which includes the correction of spaces; in Persian this covers both full spaces between words and half-spaces. A review of user comments in social media services shows that in many cases users do not follow the grammatical rules for inserting both forms of space, which increases the complexity of word identification and thereby reduces the accuracy of further processing of the text. This study addresses current issues in the normalization and tokenization stages of Persian preprocessing tools, focusing on identifying and correcting word separation and space usage. The results, obtained and compared against leading preprocessing tools, highlight the significance of the proposed methodology.
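The abstract does not describe the paper's actual algorithm, so the snippet below is only an illustrative sketch of what half-space correction means in practice: an ordinary space between a word stem and a common affix is replaced by the zero-width non-joiner (ZWNJ, U+200C), the character used as the Persian half-space. The affix lists and regular expressions here are assumptions made for the example, not the authors' rules.

import re

ZWNJ = "\u200c"  # zero-width non-joiner, the Persian "half-space"

# Small, non-exhaustive affix lists chosen for this illustration only.
SUFFIXES = ["ها", "های", "تر", "ترین"]   # plural / comparative suffixes
PREFIXES = ["می", "نمی"]                  # imperfective verb prefixes

def normalize_half_spaces(text: str) -> str:
    """Replace a full space with ZWNJ where a half-space is expected."""
    for suffix in SUFFIXES:
        # e.g. "کتاب ها" -> "کتاب‌ها"
        text = re.sub(rf"(\S) {suffix}\b", rf"\1{ZWNJ}{suffix}", text)
    for prefix in PREFIXES:
        # e.g. "می روم" -> "می‌روم"
        text = re.sub(rf"\b{prefix} (\S)", rf"{prefix}{ZWNJ}\1", text)
    return text

if __name__ == "__main__":
    sample = "کتاب ها را می خوانم"
    print(normalize_half_spaces(sample))  # -> "کتاب‌ها را می‌خوانم"

Real preprocessing tools go further than this sketch (affix dictionaries, exception lists, and disambiguation of cases where a plain space is correct), which is the kind of issue the paper's comparison against leading tools is concerned with.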