Pub Date: 2013-11-01 | DOI: 10.1109/ICCES.2013.6707189
M. S. Abou Omar, T. Khedr, B. A. Abou Zalam
PID controllers with fixed parameters cannot produce satisfactory results for systems with nonlinear or complex characteristics. Fuzzy supervisory control (FSC) is a suitable way to augment the PID controller, forming a nonlinear self-tuning fuzzy PID controller. In such controllers, a fuzzy supervisory controller at the upper level issues supervisory decisions to the PID controller at the lower level. The supervisory fuzzy rule set tunes the PID controller on-line to achieve better performance, yielding an adaptive controller. The main drawback of fuzzy logic control (FLC) is that the design becomes more difficult and time consuming as the number of inputs and outputs grows, as is the case with FSC. Moreover, fuzzy rule bases depend on the characteristics of the controlled plant and are typically determined from practical experience. This paper introduces a method for designing a fuzzy supervisory controller using particle swarm optimization to obtain the optimal rule base, scaling factors, membership function parameters, and the optimal tuning ranges for the Kp, Ki, and Kd gains of a PID controller placed in the forward loop of a nonlinear DC motor position control system that includes backlash nonlinearity.
Title: Particle swarm optimization of fuzzy supervisory controller for nonlinear position control system
Venue: 2013 8th International Conference on Computer Engineering & Systems (ICCES)
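The search loop described in the abstract can be sketched with a minimal global-best PSO. The quadratic surrogate cost below is a stand-in (the paper would evaluate a closed-loop step-response criterion such as ISE/ITAE for each candidate gain set); the target gains and bounds are illustrative only.

```python
import numpy as np

def pso(cost, dim, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()
    pbest_f = np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy stand-in for the step-response cost of a candidate (Kp, Ki, Kd):
# the optimum of this surrogate is at (2, 0.5, 0.1) by construction.
target = np.array([2.0, 0.5, 0.1])
gains, best = pso(lambda k: float(np.sum((k - target) ** 2)), dim=3, bounds=(0.0, 5.0))
```

In the paper's setting, `cost` would simulate the closed loop (including the backlash nonlinearity) for each candidate parameter vector, which is what makes PSO attractive: it needs only cost evaluations, no gradients.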
Pub Date: 2013-11-01 | DOI: 10.1109/ICCES.2013.6707162
Mona Nagy Elbedwehy, M. E. Ghoneim, A. Hassanien
The field of artificial intelligence embraces two approaches to artificial learning. The first is motivated by the study of mental processes and holds that artificial learning is the study of mechanisms embodied in the human mind; it aims to understand how these mechanisms can be translated into computer programs. The second approach originated from a practical computing standpoint and has less grandiose aims: it involves developing programs that learn from past data and may be considered a branch of data processing. In this paper, we are concerned with the first approach, focusing on classification learning, i.e., algorithms that categorize unseen examples into predefined classes based on a set of training examples. We formulate a computational model of the binary classification process using formal concept analysis. Classification rules are derived and applied successfully to several case studies.
Title: Computational model for artificial learning using formal concept analysis
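A toy illustration of classification in the formal-concept-analysis style, under the assumption (not spelled out in the abstract) that each class is represented by the intent, i.e. the shared attribute set, of its training objects; the animal attributes below are invented for the example.

```python
def class_intent(examples):
    """Intent (shared attributes) of a set of objects in a formal context."""
    it = iter(examples)
    intent = set(next(it))
    for attrs in it:
        intent &= set(attrs)
    return intent

# Invented formal context: objects described by attribute sets, per class.
train = {
    "pos": [{"wings", "feathers", "flies"}, {"wings", "feathers"}],
    "neg": [{"fur", "legs"}, {"fur", "legs", "tail"}],
}
intents = {c: class_intent(objs) for c, objs in train.items()}

def classify(attrs):
    # An unseen object is assigned to the class whose intent it contains;
    # ambiguous or unmatched objects are rejected (None).
    matches = [c for c, intent in intents.items() if intent <= set(attrs)]
    return matches[0] if len(matches) == 1 else None

print(classify({"wings", "feathers", "hops"}))  # -> pos
```

The derived rule per class is simply "object has all attributes in the class intent", which is the flavor of rule a concept lattice yields for binary classification.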
Pub Date: 2013-11-01 | DOI: 10.1109/ICCES.2013.6707225
Azza Higazy, Tarek E. El. Tobely, A. Yousef, A. Sarhan
Data accuracy and quality affect the success of any business intelligence or data mining solution. The first step toward data accuracy is to ensure that each real-world object is represented once and only once in a given dataset; this becomes more complicated when entities are identified by a string value, as is the case with person names. Such inaccuracies arise from misspellings and a wide range of typographical variations, especially in non-Latin languages such as Arabic. To the authors' knowledge, previously proposed duplicate record detection (DRD) algorithms and frameworks do not support Arabic and suffer from configuration difficulties. In this paper, an English/Arabic web-based framework is designed and implemented that accounts for the wide range of variations in Arabic. Improved indexing/blocking techniques are used to allow fast processing. The framework is implemented and verified through several case studies, and results show substantial improvements over known techniques.
Title: Web-based Arabic/English duplicate record detection with nested blocking technique
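The indexing/blocking idea can be sketched as follows: group records by a cheap key, then run the expensive string comparison only within a block. The blocking key and the `difflib` similarity below are illustrative stand-ins, not the paper's nested blocking scheme or its Arabic-aware comparison functions.

```python
from collections import defaultdict
from difflib import SequenceMatcher

def blocking_key(name):
    # Crude key: first letter plus a name-length bucket. A real system
    # would use phonetic or normalized keys adapted to Arabic orthography.
    n = name.replace(" ", "").lower()
    return (n[:1], len(n) // 3)

def find_duplicates(records, threshold=0.85):
    blocks = defaultdict(list)
    for i, name in enumerate(records):
        blocks[blocking_key(name)].append(i)
    pairs = []
    for ids in blocks.values():           # compare only within a block
        for a in range(len(ids)):
            for b in range(a + 1, len(ids)):
                i, j = ids[a], ids[b]
                if SequenceMatcher(None, records[i], records[j]).ratio() >= threshold:
                    pairs.append((i, j))
    return pairs

names = ["mohamed ali", "mohamad ali", "sara hassan", "omar khaled"]
print(find_duplicates(names))  # -> [(0, 1)]
```

Blocking trades a small risk of missed matches (true duplicates landing in different blocks) for a large reduction in pairwise comparisons, which is what makes web-scale DRD feasible.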
Pub Date: 2013-11-01 | DOI: 10.1109/ICCES.2013.6707212
M. Salem, Abdelhameed Ibrahim, H. Ali
This paper develops a segmentation method using an automatic quick-shift method based on an illumination-invariant representation of color images. The proposed method segments images into homogeneous regions by applying quick-shift with initial parameters and then automatically obtains the final segmentation by adjusting the quick-shift parameter values. The method remains valid for large images. A quantization process is applied to the invariant image, which serves as a reference image. Varying the parameter values over iterations, instead of using a single fixed value, makes the proposed algorithm flexible and robust to differing image characteristics. The effectiveness of the proposed method is examined experimentally on a variety of images containing different metallic and dielectric objects, using our color imaging system.
Title: Automatic quick-shift method for color image segmentation
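The automatic parameter loop can be sketched as below. The `segment` function is a quantization stand-in for an actual quick-shift call (e.g. `skimage.segmentation.quickshift`); only the iterate-until-coarse-enough control flow reflects the abstract, and the target region count is an invented stopping criterion.

```python
import numpy as np

def segment(img, kernel_size):
    # Stand-in for a quick-shift segmentation call: a coarser kernel
    # is modeled here as fewer quantization levels, hence fewer regions.
    levels = max(2, 16 // kernel_size)
    return np.digitize(img, np.linspace(0, 1, levels))

def auto_segment(img, target_regions=4, max_kernel=8):
    # Sweep the parameter value instead of fixing it by hand, stopping
    # once the segmentation is coarse enough.
    for k in range(1, max_kernel + 1):
        labels = segment(img, k)
        if len(np.unique(labels)) <= target_regions:
            return labels, k
    return labels, max_kernel

rng = np.random.default_rng(1)
img = rng.random((32, 32))       # toy grayscale stand-in for the invariant image
labels, k = auto_segment(img, target_regions=4)
```

The design point carried over from the paper is that the stopping test, not a hand-picked parameter, decides the final segmentation, which is what makes the method adapt across image characteristics.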
Pub Date: 2013-11-01 | DOI: 10.1109/ICCES.2013.6707203
P. Malík
An FPGA-based hardware architecture for arithmetic mean filtration over a 49-pixel square neighborhood is proposed. The arithmetic mean formula is transformed into a new form that introduces a computational cyclic sequence, yielding a multiplication-free process with only 9 additions per pixel. External memory is used to store partial results, but the memory requirement has been optimized to match that of the input data. The proposed architecture targets security tracking applications; however, it can be used in any image processing application that employs arithmetic mean filtering. It is independent of resolution and frame rate and is suitable for high-resolution and multi-camera systems. FPGA optimization also makes it suitable for FPGA-based reconfigurable systems and computing.
Title: Hardware architecture dedicated for arithmetic mean filtration implemented in FPGA
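The running-sum idea behind the few-additions datapath can be sketched in software. The exact cyclic sequence is the paper's; the NumPy cumulative-sum formulation below is just an equivalent way to compute the same 7×7 (49-pixel) means, shown to make the add/subtract reuse explicit.

```python
import numpy as np

def mean_filter_49(img):
    """7x7 arithmetic mean via separable running column/row sums.

    A direct 7x7 mean needs 48 additions per pixel. Keeping per-column
    sums and updating each sum cyclically (add the entering pixel,
    subtract the leaving one) reduces the per-pixel work to a handful of
    adds, which is the idea behind the multiplication-free FPGA datapath
    (the final divide-by-49 becomes a constant shift-add in hardware).
    """
    pad = np.pad(img.astype(float), 3, mode="edge")
    # vertical running sums: row j holds the sum of 7 vertical neighbours
    c = np.cumsum(pad, axis=0)
    c = np.vstack([c[6:7], c[7:] - c[:-7]])
    # horizontal running sums over the column sums
    r = np.cumsum(c, axis=1)
    win = np.hstack([r[:, 6:7], r[:, 7:] - r[:, :-7]])
    return win / 49.0

img = np.arange(100, dtype=float).reshape(10, 10)
out = mean_filter_49(img)
```

For this test image (value `10*row + col`), the interior mean at (5, 5) is exactly 55, which checks the window alignment.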
Pub Date: 2013-11-01 | DOI: 10.1109/ICCES.2013.6707175
S. Darwish, S. Guirguis, Mahmoud M. Ghozlan
Most of the valuable information resources of any organization are stored in databases, and protecting this information against intruders is a serious concern. Conventional security mechanisms, however, have not been designed to detect anomalous actions by database users. Intrusion detection systems (IDS) deliver an extra layer of security that built-in security tools cannot guarantee, and they provide an effective way to defend databases from intruders. In this paper, we propose an anomaly detection approach that summarizes raw transactional SQL queries into a compact data structure called a hexplet, which can model normal database access behavior (abstracting the user's role profile) and recognize impostors; it is specifically tailored to role-based access control (RBAC) database systems. The hexplet preserves the correlation among SQL statements in the same transaction by exploiting the information in the transaction-log entry. Our goal is to improve detection accuracy, especially for insiders within the organization who exhibit anomalous behavior. The model uses a Naive Bayes classifier (NBC) as a simple technique for evaluating the legitimacy of a transaction. Experimental results show the performance of the proposed model in terms of equal error rate.
Title: Intrusion detection in role administrated database: Transaction-based approach
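A minimal sketch of role-profile anomaly detection with a categorical Naive Bayes classifier. The (command, table, attribute-count) features are an invented stand-in for the paper's hexplet summary; a transaction whose most probable role differs from the role it was issued under would be flagged as anomalous.

```python
from collections import Counter, defaultdict
import math

def train_nb(samples):
    """Categorical Naive Bayes over per-transaction features.

    Each sample is (features, role); a prediction returns the role whose
    profile best explains the features (with Laplace smoothing).
    """
    priors = Counter(role for _, role in samples)
    likes = defaultdict(Counter)                 # (role, slot) -> value counts
    for feats, role in samples:
        for slot, val in enumerate(feats):
            likes[(role, slot)][val] += 1
    def log_post(feats, role):
        lp = math.log(priors[role] / sum(priors.values()))
        for slot, val in enumerate(feats):
            c = likes[(role, slot)]
            lp += math.log((c[val] + 1) / (sum(c.values()) + len(c) + 1))
        return lp
    return lambda feats: max(priors, key=lambda r: log_post(feats, r))

# Invented training log: (SQL command, table, #attributes touched) per role.
log = [(("SELECT", "accounts", 2), "teller"),
       (("SELECT", "accounts", 3), "teller"),
       (("UPDATE", "salaries", 5), "hr"),
       (("UPDATE", "salaries", 4), "hr")]
predict = train_nb(log)
# A SELECT on accounts matches the teller profile; an UPDATE on salaries
# issued under the teller role would predict "hr" and be flagged.
print(predict(("SELECT", "accounts", 2)))  # -> teller
```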
Pub Date: 2013-11-01 | DOI: 10.1109/ICCES.2013.6707190
M. Hamdy, I. Hamdan, M. Ibrahim
In this paper, a non-fragile bilinear state feedback controller is designed for multi-input multi-output (MIMO) bilinear systems. Controller fragility refers to the sensitivity of the closed loop to variations in the controller parameters, such as those introduced by implementation errors. The stability conditions of the overall closed-loop MIMO bilinear system in the presence of additive controller gain perturbations are formulated in the Lyapunov framework via linear matrix inequalities (LMIs), and the gain of each non-fragile controller is calculated by solving a set of LMIs. Finally, an application example of a headbox control system for a paper-making machine illustrates the applicability of the proposed method.
Title: Non-fragile bilinear state feedback controller for a class of MIMO bilinear systems
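Schematically (and not the paper's exact derivation), the non-fragile design problem has the following shape. The plant is bilinear with additive perturbation on the feedback gain:

```latex
\dot{x} = A x + \sum_{i=1}^{m} u_i N_i x + B u,
\qquad u = (K + \Delta K)\,x, \quad \|\Delta K\| \le \delta .
```

Stability is certified by a quadratic Lyapunov function $V(x) = x^{\top} P x$: find $P = P^{\top} \succ 0$ such that, for all admissible $\Delta K$ (and over the bounded operating region where the bilinear terms $u_i N_i x$ can be bounded),

```latex
\bigl(A + B(K + \Delta K)\bigr)^{\top} P
+ P \bigl(A + B(K + \Delta K)\bigr) \prec 0 ,
```

which is then convexified into a set of LMIs jointly solvable for $P$ and the gain $K$.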
Pub Date: 2013-11-01 | DOI: 10.1109/ICCES.2013.6707199
Ali Ahmed Ali Ali Khalil, El Sayed Mostafa Saad, Mostafa Abd El-Nabi, F. A. Abd El-Samie
This paper presents a study of speaker recognition when the test speech signals are degraded by transmission over a Bluetooth channel while the training phase uses clean speech. Recognition is based on Mel-frequency cepstral coefficients (MFCCs) for feature extraction. Different feature-extraction approaches are tested: from the signals themselves, from the discrete cosine transform (DCT) of the signals, from the signals plus their DCT, from the discrete sine transform (DST) of the signals, from the signals plus their DST, from the discrete wavelet transform (DWT) of the signals, and finally from the signals plus their DWT. A neural network (NN) classifier is used in the simulation experiments. Simulation results show that feature extraction from the DCT of the signals achieves the highest recognition rates.
Title: Efficient speaker identification from speech transmitted over bluetooth based system
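A sketch of why DCT-domain features can separate speakers: low-order DCT coefficients compactly summarize the spectral content, so identification reduces to comparing short feature vectors. The synthetic "speakers" and the nearest-centroid decision below are stand-ins for the paper's MFCC pipeline, Bluetooth channel model, and neural network classifier.

```python
import numpy as np

def dct2(x):
    # Orthonormal DCT-II built from its transform matrix
    # (numpy-only stand-in for scipy.fftpack.dct(x, norm='ortho')).
    n = len(x)
    k = np.arange(n)
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= 1 / np.sqrt(2)
    return M @ x

def features(signal, n_coeffs=12):
    # Low-order DCT coefficients as a compact spectral feature vector.
    return dct2(signal)[:n_coeffs]

t = np.linspace(0, 1, 256, endpoint=False)
spk_a = np.sin(2 * np.pi * 5 * t)        # toy "speaker" signatures
spk_b = np.sin(2 * np.pi * 40 * t)
centroids = {"A": features(spk_a), "B": features(spk_b)}

def identify(sig):
    f = features(sig)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

rng = np.random.default_rng(0)
noisy = spk_a + 0.3 * rng.standard_normal(256)   # channel-degraded test signal
print(identify(noisy))  # -> A
```

The mismatch scenario in the paper (clean training, degraded testing) is mirrored here by identifying the noisy signal against centroids built from clean signals.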
Pub Date: 2013-11-01 | DOI: 10.1109/ICCES.2013.6707215
M. Al-Berry, M. A. Salem, A. S. Hussein, M. Tolba
Detecting and tracking moving objects in complicated real-world scenes is a fundamental component of a wide variety of applications, including intelligent surveillance, advanced robotics, and human-computer interaction; subsequent processing is shaped by this fundamental step. Many standard algorithms for detecting moving objects are known, with differing performance and time complexity, including optical flow, background subtraction, frame differencing, and wavelet filters. Existing frame differencing has limited capability in detecting slowly moving objects, especially in the presence of illumination variations. In this paper, a technique is proposed for detecting moving objects in scenes with non-uniform illumination. The proposed technique is based on accumulative frame differencing and is enhanced using the 2-D discrete wavelet transform (DWT). Evaluation and comparison with existing techniques demonstrate the efficiency of using the 2-D DWT in motion detection.
Title: Motion detection using wavelet-enhanced accumulative frame differencing
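A numpy-only sketch of the idea: difference the Haar approximation band between frames and accumulate the differences over time. The weighting, threshold, and moving-square scene are invented; only the wavelet-then-accumulate-then-threshold structure follows the abstract.

```python
import numpy as np

def haar2(img):
    # Single-level 2-D Haar DWT (numpy stand-in for pywt.dwt2(img, 'haar')).
    a = (img[0::2] + img[1::2]) / 2.0          # vertical average
    v = (img[0::2] - img[1::2]) / 2.0          # vertical detail
    ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    lh, hh = (v[:, 0::2] + v[:, 1::2]) / 2.0, (v[:, 0::2] - v[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def accumulative_motion(frames, weight=0.5, thresh=0.1):
    """Weighted accumulation of approximation-band frame differences.

    Differencing the LL (approximation) subband suppresses small
    fluctuations that land in the detail bands, while accumulation
    strengthens the evidence for slowly moving objects.
    """
    acc = None
    prev, _ = haar2(frames[0])
    for f in frames[1:]:
        ll, _ = haar2(f)
        d = np.abs(ll - prev)
        acc = d if acc is None else weight * acc + d
        prev = ll
    return acc > thresh

frames = [np.zeros((16, 16)) for _ in range(5)]
for i, f in enumerate(frames):
    f[4:8, 2 + i:6 + i] = 1.0                  # a slowly moving square
mask = accumulative_motion(frames)
```

The returned mask lives at half resolution (the LL band), which is also why the wavelet variant is cheaper per frame than full-resolution differencing.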
Pub Date: 2013-11-01 | DOI: 10.1109/ICCES.2013.6707161
Yasser Hifny
Arabic script can be written with or without diacritics. Normally, Arabic text is written without diacritics (e.g., in Arabic newspapers). When the diacritics are present, the script provides enough information about the correct pronunciation and meaning of the words. Assigning the correct diacritics to Arabic words is a complex task involving morphological, syntactic, and semantic processing. The goal of this research is to develop an automatic system that assigns diacritics to Arabic words. The presented technique is purely statistical and depends only on an Arabic corpus annotated with diacritics. We present an algorithm that restores Arabic diacritics using dynamic programming: candidate word sequences with diacritics are scored with a statistical n-gram language model, and the most likely sequence is found by a dynamic-programming search. When the case ending (the diacritic of the last letter) is ignored, preliminary results on a public-domain corpus show that the algorithm achieves good results.
Title: Restoration of Arabic diacritics using dynamic programming
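A sketch of the lattice search: each undiacritized word expands into candidate diacritized forms, candidates are scored with bigram statistics, and a Viterbi-style dynamic-programming pass picks the most likely sequence. The Buckwalter-like ASCII words and counts are invented stand-ins for statistics estimated from a real diacritized corpus.

```python
import math

# Invented candidate table and bigram counts (ASCII placeholders for
# Arabic script); a real system estimates these from the corpus.
CANDS = {"ktb": ["kataba", "kutub"], "Alwld": ["Alwaladu"], "drs": ["darasa", "dars"]}
BIGRAM = {("<s>", "kataba"): 3, ("<s>", "kutub"): 1,
          ("kataba", "Alwaladu"): 4, ("kutub", "Alwaladu"): 1,
          ("Alwaladu", "darasa"): 2, ("Alwaladu", "dars"): 1}

def score(prev, cur):
    return math.log(BIGRAM.get((prev, cur), 0) + 0.5)   # smoothed log count

def diacritize(words):
    # Viterbi over the lattice of candidate diacritized forms:
    # best[state] = (log-probability, best path ending in state).
    best = {"<s>": (0.0, [])}
    for w in words:
        nxt = {}
        for cand in CANDS.get(w, [w]):      # unknown words pass through
            prev, (s, path) = max(best.items(),
                                  key=lambda kv: kv[1][0] + score(kv[0], cand))
            nxt[cand] = (s + score(prev, cand), path + [cand])
        best = nxt
    return max(best.values())[1]

print(diacritize(["ktb", "Alwld", "drs"]))  # -> ['kataba', 'Alwaladu', 'darasa']
```

The dynamic program keeps only the best path into each candidate, so the search is linear in sentence length times the (small) number of candidates per word, rather than exponential in the sentence length.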