Simulating the merge between user-centered graphs of social networks
Pub Date: 2016-11-01
DOI: 10.1109/AICCSA.2016.7945817
Amina Amara, Rihem Ben Romdhane, Mohamed Ali Hadj Taieb, M. Benaouicha
Inter-social-network data are an important source of information for several research fields, such as sentiment analysis, content propagation, and the identification of influential users. The user-centered graph of a social network captures the connections between users through their profiles, and it represents the flow of content propagated across social networks. In this paper, we present a novel structure that merges the user-centered graphs of different social networks. This structure allows such graphs to be simulated and visualized. It is designed and developed as a plug-in for the well-known Gephi software. The plug-in allows the definition of the graph structure, including various parameters related to users and their relationships, and the generation of a graph that can be visualized and processed with the many analysis tools available in Gephi.
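The merge can be pictured as composing two user-centered graphs whose nodes are matched through user profiles. Below is a minimal sketch using networkx as a stand-in for the Gephi plug-in API (which the paper does not detail); the profile-matching rule and attribute names are assumptions for illustration:

```python
import networkx as nx

def merge_user_centered_graphs(g_a, g_b, same_user):
    """Merge two user-centered graphs into one inter-social-network graph.

    same_user(profile_a, profile_b) -> bool decides whether two profiles
    belong to the same person (e.g. identical e-mail); this matching rule
    is an assumption, not the paper's actual criterion.
    """
    merged = nx.Graph()
    # Copy both graphs, namespacing node ids so they cannot collide.
    for tag, g in (("A", g_a), ("B", g_b)):
        for n, attrs in g.nodes(data=True):
            merged.add_node((tag, n), **attrs)
        for u, v, attrs in g.edges(data=True):
            merged.add_edge((tag, u), (tag, v), **attrs)
    # Link nodes that represent the same user on both networks.
    for a, pa in g_a.nodes(data=True):
        for b, pb in g_b.nodes(data=True):
            if same_user(pa, pb):
                merged.add_edge(("A", a), ("B", b), kind="same-user")
    return merged

g1 = nx.Graph()
g1.add_node("alice", email="a@x.org")
g1.add_node("bob", email="b@x.org")
g1.add_edge("alice", "bob")
g2 = nx.Graph()
g2.add_node("al", email="a@x.org")
m = merge_user_centered_graphs(g1, g2, lambda p, q: p.get("email") == q.get("email"))
print(m.number_of_nodes(), m.number_of_edges())  # 3 nodes, 2 edges
```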
{"title":"Simulating the merge between user-centered graphs of social networks","authors":"Amina Amara, Rihem Ben Romdhane, Mohamed Ali Hadj Taieb, M. Benaouicha","doi":"10.1109/AICCSA.2016.7945817","DOIUrl":"https://doi.org/10.1109/AICCSA.2016.7945817","url":null,"abstract":"The inter-social networks data represent an important source of information for several research fields like the sentiment analysis, the content propagation and the determination of influential users. The user-centered graph of the social networks designs their connection through the users' profiles. It represents the flow of the contents propagation via inter-social networks. In this paper, we present a novel structure merging the user centered graphs of different socials networks. This structure allows the simulation and the visualization of such graphs illustrating the social networks. It is designed and developed as a plug-in within the known software Gephi1. It allows the definition of the graph structure including different parameters in relation with users and their relationships, and the generation of a graph which can be visualized and handled through several treatments present in Gephi.","PeriodicalId":448329,"journal":{"name":"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131707731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Class-association rules pruning using regularization
Pub Date: 2016-11-01
DOI: 10.1109/AICCSA.2016.7945625
Mohamed Azmi, A. Berrado
Association rule mining is a data mining technique that seeks interesting associations between attributes in massive, high-dimensional categorical feature spaces. However, as the dimensionality grows, the data become sparser, which leads to the discovery of a large number of association rules that are difficult to understand and interpret. In this paper, we focus on a particular type of association rule, namely Class-Association Rules (CARs), and introduce a new approach to pruning CARs based on Lasso regularization. We propose to exploit the variable-selection ability of Lasso regularization to prune the less interesting rules. Experimental analysis shows that the proposed approach outperforms CBA in terms of both the number and the quality of the rules retained after pruning.
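The idea can be sketched by encoding each CAR as a binary feature (does the rule's antecedent fire on a transaction?) and letting an L1-penalized classifier zero out the weaker rules. A minimal sketch with scikit-learn, assuming this rule encoding and an arbitrary penalty strength; it is not the paper's exact formulation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy transactions over items {0, 1, 2} and a binary class label.
transactions = np.array([[1,0,1], [1,1,0], [0,1,1], [1,0,0], [0,1,0], [1,1,1]])
labels       = np.array([ 1,       0,       1,       1,       0,       1     ])

# Candidate CARs as antecedent item sets (mined beforehand, e.g. by Apriori).
rules = [{0}, {1}, {2}, {0, 2}]

def fires(rule, t):
    """A rule fires on a transaction when all antecedent items are present."""
    return all(t[i] == 1 for i in rule)

# Rule-firing matrix: one binary column per candidate rule.
X = np.array([[fires(r, t) for r in rules] for t in transactions], dtype=float)

# L1 (Lasso-style) regularization drives the weights of weak rules to zero;
# only rules with a surviving nonzero coefficient are kept after pruning.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, labels)
kept = [r for r, w in zip(rules, clf.coef_[0]) if abs(w) > 1e-6]
print("kept rules:", kept)
```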
{"title":"Class-association rules pruning using regularization","authors":"Mohamed Azmi, A. Berrado","doi":"10.1109/AICCSA.2016.7945625","DOIUrl":"https://doi.org/10.1109/AICCSA.2016.7945625","url":null,"abstract":"Association rules mining is a data mining technique that seeks interesting associations between attributes from massive high-dimensional categorical feature spaces. However, as the dimensionality gets higher, the data gets sparser which results in the discovery of a large number of association rules and makes it difficult to understand and to interpret. In this paper, we focus on a particular type of association rules namely Class-Association Rules (CARs) and we introduce a new approach of Class-Association Rules pruning based on Lasso regularization. In this approach we propose to take advantage of variable selection ability of Lasso regularization to prune less interesting rules. The experimental analysis shows that the introduced approach gives better results than CBA in term of number as well as the quality of the obtained rules after pruning.","PeriodicalId":448329,"journal":{"name":"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133228472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient intra block size decision for H.264/AVC encoding optimization
Pub Date: 2016-11-01
DOI: 10.1109/AICCSA.2016.8015035
A. Elyousfi, Hamza Hamout, Asma El Hachimi
The H.264/MPEG-4 AVC video coding standard uses variable block sizes in intra coding. This feature achieves a significant coding gain compared to coding a macroblock (MB) with a fixed block size, but it results in extremely high computational complexity when a brute-force rate-distortion optimization (RDO) algorithm is used. In this paper, we propose an efficient intra block size decision method for H.264/AVC encoding optimization. It exploits the spatial homogeneity characteristics of the macroblock: the spatial homogeneity of an MB is decided from the amplitude of the proposed MB vector. Based on the homogeneity of the macroblock, only a small number of intra prediction modes are selected in the RDO process. Various video sequences are used to test the performance of the proposed method. Experimental results reveal significant computational savings with only slight Peak Signal-to-Noise Ratio (PSNR) degradation and bit-rate increase.
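The decision logic can be illustrated by measuring macroblock homogeneity and narrowing the RDO candidate set accordingly. A minimal numpy sketch, assuming a simple gradient-energy homogeneity measure and an arbitrary threshold; the paper's actual MB-vector amplitude is not reproduced here:

```python
import numpy as np

def homogeneity(mb):
    """Mean gradient energy of a 16x16 macroblock; low energy = homogeneous.
    This measure is an assumed stand-in for the paper's MB-vector amplitude."""
    gy, gx = np.gradient(mb.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def candidate_modes(mb, threshold=8.0):
    """Restrict the RDO search: homogeneous MBs try only Intra16x16 modes,
    detailed MBs try only Intra4x4 modes (the threshold is an assumption)."""
    if homogeneity(mb) < threshold:
        return ["Intra16x16/" + m for m in ("V", "H", "DC", "Plane")]
    return ["Intra4x4/" + m for m in ("V", "H", "DC")]  # subset for brevity

flat = np.full((16, 16), 128)                               # homogeneous MB
noisy = np.random.default_rng(0).integers(0, 256, (16, 16))  # detailed MB
print(candidate_modes(flat))   # Intra16x16 modes only
print(candidate_modes(noisy))  # Intra4x4 modes only
```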
{"title":"An efficient intra block size decision for H.264/AVC encoding optimization","authors":"A. Elyousfi, Hamza Hamout, Asma El Hachimi","doi":"10.1109/AICCSA.2016.8015035","DOIUrl":"https://doi.org/10.1109/AICCSA.2016.8015035","url":null,"abstract":"The video coding standard, H.264/MPEG-4 AVC, uses variable block sizes in intra coding. This feature has achieved significant coding gain compared to coding a macroblock (MB) using fixed block size. However, this feature results in extremely high computational complexity when brute force rate distortion optimization (RDO) algorithm is used. In this paper, we propose an Efficient Intra Block Size Decision for H.264/AVC Encoding Optimization. It makes use of the spatial homogeneity characteristics of the macroblock. Specifically, spatial homogeneity of a MB is decided based on the amplitude value of the proposed MB vector. Based on the homogeneity of the macroblock, only a small number of intra prediction modes are selected in the RDO process. Different video sequences are used to test the performance of proposed method. Experimental results reveal the significant computational savings achieved with slight Peak Signal-to-Noise Ratio (PSNR) degradation and bit-rate increase.","PeriodicalId":448329,"journal":{"name":"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)","volume":"115 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132795655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient image search system with multi-scale database using profile-based pre-selection and coarse matching
Pub Date: 2016-11-01
DOI: 10.1109/AICCSA.2016.7945646
Kaoru Uchida
Artifact-metrics technology is attracting growing research interest as its applications expand. One of its challenges is efficient image search, in which a match must be identified in a large multi-scale image database given a possibly distorted query image of unknown location, orientation, and scale. To address this computational-efficiency problem of image database search, we conducted a preliminary feasibility study focused specifically on the aerial photo search problem. We propose a highly efficient image search system that finds a match in a multi-layered database of images at multiple magnifications. The system first pre-selects matching candidates by comparing image profiles, such as frequency spectra, so that the subsequent matching stages focus on the appropriate scale layer and the search is accelerated. Then, in the coarse matching stage, the down-sampled query image is compared with images in a lower-magnification layer using a scale-invariant matcher based on local feature descriptors. This paper outlines our interim approach and discusses its feasibility and performance based on experimental results from our ongoing research work.
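A compressed sketch of the two stages: pre-select the scale layer by comparing 1-D frequency-spectrum profiles, then run a scale-invariant local-descriptor matcher on the down-sampled query. ORB is used here as a stand-in for the paper's unspecified descriptor, and the layer organization and scoring are assumptions:

```python
import numpy as np
import cv2

def spectrum_profile(img, bins=32):
    """Radially averaged magnitude spectrum: a cheap, scale-sensitive profile."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(float))))
    cy, cx = np.array(f.shape) // 2
    y, x = np.indices(f.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    prof = np.bincount(r.ravel(), f.ravel(), minlength=bins)[:bins]
    return prof / (prof.sum() + 1e-9)

def preselect_layer(query, layers):
    """Pick the database layer whose mean profile is closest to the query's."""
    qp = spectrum_profile(query)
    dists = [np.linalg.norm(qp - np.mean([spectrum_profile(i) for i in imgs], axis=0))
             for imgs in layers]
    return int(np.argmin(dists))

def coarse_match(query, candidate):
    """Count ORB descriptor matches between the query and one candidate image."""
    orb = cv2.ORB_create()
    _, dq = orb.detectAndCompute(query, None)
    _, dc = orb.detectAndCompute(candidate, None)
    if dq is None or dc is None:
        return 0
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return len(bf.match(dq, dc))
```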
{"title":"Efficient image search system with multi-scale database using profile-based pre-selection and coarse matching","authors":"Kaoru Uchida","doi":"10.1109/AICCSA.2016.7945646","DOIUrl":"https://doi.org/10.1109/AICCSA.2016.7945646","url":null,"abstract":"Artifact-metrics technology is gaining more research interests, along with the expansion of its applications. One of its challenges is an efficient image search, in which a match is to be identified in a large multi-scale image database with a given, possibly distorted, query image having unknown location, orientation, and scale. To approach this computational efficiency problem of image database search, we conducted a preliminary feasibility study focused specifically on aerial photo search problem. We propose a highly efficient image search system to find a match in the multi-layered database of images with multiple magnitudes. The system first pre-selects matching candidates based on comparison results of image profiles such as frequency spectra, so that the following matching stages focus on the appropriate scale layer to accelerate search. Then in the coarse matching stage, the down-sampled query image is compared with images in a lower-magnitude layer using a scale-invariant matcher based on local feature descriptors. This paper outlines our interim proposed approach and discusses its feasibility and performance based on the experimental results from our ongoing research work.","PeriodicalId":448329,"journal":{"name":"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122521483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Moving object segmentation in video using spatiotemporal saliency and Laplacian coordinates
Pub Date: 2016-11-01
DOI: 10.1109/AICCSA.2016.7945726
Hiba Ramadan, H. Tairi
This paper presents a new algorithm for the automatic segmentation of moving objects in video based on spatiotemporal saliency and Laplacian coordinates (LC). Our algorithm exploits saliency and motion information to build a spatiotemporal saliency map, which is used to extract a moving region of interest (MRI). This region automatically provides the seeds for segmenting the moving object with LC. Experiments show the good performance of our algorithm for moving-object segmentation in video without user interaction, especially on the SegTrack dataset.
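The pipeline can be sketched as: fuse a static saliency cue with a motion cue into a spatiotemporal map, threshold it into foreground/background seeds, and hand the seeds to a seeded segmenter. A minimal OpenCV sketch; the saliency cues and thresholds are simple stand-ins for the paper's model, and GrabCut replaces the Laplacian-coordinates solver:

```python
import numpy as np
import cv2

def spatiotemporal_saliency(prev_gray, gray):
    """Fuse a crude local-contrast cue with frame-difference motion."""
    # Static cue: difference from a heavily blurred copy (contrast proxy).
    static = cv2.absdiff(gray, cv2.GaussianBlur(gray, (0, 0), 8)).astype(float)
    motion = cv2.absdiff(gray, prev_gray).astype(float)  # temporal cue
    s = (static / (static.max() + 1e-9)) * (motion / (motion.max() + 1e-9))
    return s / (s.max() + 1e-9)

def segment_moving_object(prev_bgr, bgr):
    gray  = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    prevg = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    sal = spatiotemporal_saliency(prevg, gray)
    # Saliency thresholds become automatic seeds (0.3 / 0.6 are assumptions).
    mask = np.full(gray.shape, cv2.GC_PR_BGD, np.uint8)
    mask[sal > 0.3] = cv2.GC_PR_FGD
    mask[sal > 0.6] = cv2.GC_FGD
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr, mask, None, bgd, fgd, 3, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```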
{"title":"Moving object segmentation in video using spatiotemporal saliency and laplacian coordinates","authors":"Hiba Ramadan, H. Tairi","doi":"10.1109/AICCSA.2016.7945726","DOIUrl":"https://doi.org/10.1109/AICCSA.2016.7945726","url":null,"abstract":"This paper presents a new algorithm for automatic segmentation of moving objects in video based on spatiotemporal saliency and laplacian coordinates (LC). Our algorithm exploits the saliency and the motion information to build a spatio-temporal saliency map, used to extract a moving region of interest (MRI). This region is used to provide automatically the seeds for the segmentation of the moving object using LC. Experiments show a good performance of our algorithm for moving objects segmentation in video without a user interaction, especially on Segtrack dataset.","PeriodicalId":448329,"journal":{"name":"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)","volume":"58 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116228625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The collaborative relevance in the distributed information retrieval
Pub Date: 2016-11-01
DOI: 10.1109/AICCSA.2016.7945827
Adil Enaanai, Aziz Sdigui Doukkali, Ichrak Saif, Hicham Moutachaouik, M. Hain
Relevance is one of the most interesting topics in information retrieval. In this paper, we introduce an alternative method of relevance calculation that uses the implicit opinions of users. Implicit user judgments are injected into documents by computing several kinds of weights, covering criteria such as the user's weight in the query's words, the user's profile, the user's interests, the document's content, and the document's popularity. In this method, each user is an active element of the system: they search for documents and perform processing that provides relevant information to other users in the network, much as in peer-to-peer systems. Unlike those systems, however, each element (user) automatically manages their own data by building a short view model of their most visited documents and computing their relative relevance for each one. Relative relevance varies from user to user, so the final relevance is computed by averaging the elementary relevances of all users; hence the name collaborative relevance.
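The final score is simply the mean of the per-user relative relevances. A minimal sketch, assuming each peer exposes a mapping from document id to its locally computed relative relevance:

```python
from collections import defaultdict

def collaborative_relevance(user_scores):
    """Average the elementary (per-user) relevances of each document.

    user_scores: {user_id: {doc_id: relative_relevance}}; documents a user
    never judged simply do not contribute to that document's average.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for scores in user_scores.values():
        for doc, rel in scores.items():
            totals[doc] += rel
            counts[doc] += 1
    return {doc: totals[doc] / counts[doc] for doc in totals}

scores = {
    "u1": {"d1": 0.9, "d2": 0.4},
    "u2": {"d1": 0.7},
    "u3": {"d2": 0.8, "d3": 0.5},
}
print(collaborative_relevance(scores))  # d1: 0.8, d2: 0.6, d3: 0.5
```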
{"title":"The collaborative relevance in the distributed information retrieval","authors":"Adil Enaanai, Aziz Sdigui Doukkali, Ichrak Saif, Hicham Moutachaouik, M. Hain","doi":"10.1109/AICCSA.2016.7945827","DOIUrl":"https://doi.org/10.1109/AICCSA.2016.7945827","url":null,"abstract":"Relevance is one of the most interesting topics in the information retrieval domain. In this paper, we introduce another method of relevance calculation. We propose to use the implicit opinion of users to calculate relevance. The Implicit judgment of users is injected to the documents by calculating different kinds of weighting. These latter touch several criteria like as user's weight in the query's words, user's profile, user's interest, document's content and the document popularity. In this method, each user is an active element of the system, he searches documents and he makes treatments to provide relevant information to other users in the Network. This is similar as the peer-to-peer systems; unlike that, an element (user) have to manage automatically his data by creating a short view model of his most visited documents, and calculates his relative relevance about each one. The relative relevance is variable according each user, so the final relevance is calculated by the averaging of the elementary relevance of all users. Hence, the name of collaborative relevance.","PeriodicalId":448329,"journal":{"name":"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115028580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards a dynamic and polarity-aware social user profile modeling
Pub Date: 2016-11-01
DOI: 10.1109/AICCSA.2016.7945626
Abir Gorrab, Ferihane Kboubi, H. Ghézala, B. L. Grand
The emergence of social networks and the communication facilities they offer has generated an enormous mass of information. This social content is used in several research and industrial works and has had a great impact on different processes. In this paper, we present an overview of the use of social information in Information Retrieval (IR) and recommender systems. We first describe several user profile models that use social information, paying special attention to the analysis of user-profiling models that incorporate social content into IR and into social recommendation methods. We distinguish between models that use social signals and relations and models that use temporal information. We also present current and future challenges and research directions for enhancing IR and the recommendation process. We then describe our proposed model for building a polarized, temporal social user profile and for using it in a social recommendation context. Our proposal addresses open challenges and establishes a new user-profile model that fits information needs in recommender systems.
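One way to make a profile both polarity-aware and dynamic is to weight each social signal by its sentiment polarity and decay it exponentially with age. Below is a minimal sketch under those assumptions; the decay rate and combination rule are illustrative, not the paper's exact model:

```python
import math
import time

class SocialUserProfile:
    """Interest weights built from polarized, time-stamped social signals."""

    def __init__(self, half_life_days=30.0):
        self.decay = math.log(2) / (half_life_days * 86400)  # per-second rate
        self.signals = []  # (term, polarity in [-1, 1], unix timestamp)

    def add_signal(self, term, polarity, timestamp):
        self.signals.append((term, polarity, timestamp))

    def interests(self, now=None):
        """Term -> sum of polarity * exp(-decay * age): recent positive
        signals dominate, old or negative ones fade or push the weight down."""
        now = time.time() if now is None else now
        weights = {}
        for term, pol, ts in self.signals:
            w = pol * math.exp(-self.decay * (now - ts))
            weights[term] = weights.get(term, 0.0) + w
        return weights

p = SocialUserProfile()
t0 = 1_700_000_000
p.add_signal("python", +1.0, t0)                # liked recently
p.add_signal("python", +0.5, t0 - 90 * 86400)   # liked long ago: decayed
p.add_signal("golf",   -0.8, t0)                # disliked: negative weight
print(p.interests(now=t0))
```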
{"title":"Towards a dynamic and polarity-aware social user profile modeling","authors":"Abir Gorrab, Ferihane Kboubi, H. Ghézala, B. L. Grand","doi":"10.1109/AICCSA.2016.7945626","DOIUrl":"https://doi.org/10.1109/AICCSA.2016.7945626","url":null,"abstract":"The emergence of social networks and the communication facilities they offer have generated an enormous informational mass. This social content is used in several research and industrial works and has had a great impact in different processes. In this paper, we present an overview of social information use in Information Retrieval (IR) and Recommendation systems. We first describe several user profile models using social information. A special attention is given to the following points: the analysis of the different user profiling models incorporating social content in Information Retrieval (IR) and in social recommendation methods. We distinguish between the models using social signals and relations, and the models using temporal information. We also present current and future challenges and research directions to enhance IR and recommendation process. We then describe our proposed model of social polarized and temporal user profile building and use in social recommendation context. Our proposal tries to address open challenges and establish a new model of user profile that fits information needs in recommender systems.","PeriodicalId":448329,"journal":{"name":"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122030605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data placement strategy for massive data applications based on FCA approach
Pub Date: 2016-11-01
DOI: 10.1109/AICCSA.2016.7945616
Zaki Brahmi, Sahar Mili, Rihab Derouiche
Massive-data applications, such as e-science applications, are characterized by complex processing of large amounts of data that need to be stored in distributed data centers. When a task needs several datasets from different data centers, moving these data may cost a lot of time and consume substantial energy. Moreover, when the number of data centers involved in the execution of tasks is high, the total data movement and execution time increase dramatically and become a bottleneck, since data centers have limited bandwidth. A good data placement strategy is therefore needed to minimize data movement between data centers and reduce energy consumption. Indeed, many studies address data placement strategies that distribute data in ways that benefit application execution. In this paper, our data placement strategy aims at grouping as many data items and tasks as possible into a minimal number of data centers. It is based on the Formal Concept Analysis (FCA) approach, whose notion of a concept matches our idea: a concept faithfully represents a group of tasks together with the data required for their execution. The strategy has four steps: 1) hierarchical organization of tasks using the FCA approach; 2) selection of candidate concepts; 3) assignment of data to the appropriate data centers; and 4) data replication. Simulations show that our strategy effectively reduces data movement and average query spans compared to a genetic approach.
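Step 1 can be illustrated by deriving formal concepts from the task-dataset incidence relation: each concept pairs a set of tasks with exactly the datasets they all need, which is a natural co-location unit. A naive sketch under that reading (concepts enumerated by closing every task subset; real FCA tools scale far better, and steps 2-4 are omitted):

```python
from itertools import combinations

# Binary context: which datasets each task needs (toy example).
needs = {
    "t1": {"d1", "d2"},
    "t2": {"d1", "d2", "d3"},
    "t3": {"d3"},
}

def concepts(context):
    """Enumerate formal concepts (task extent, common dataset intent) naively."""
    tasks = list(context)
    seen, out = set(), []
    for k in range(1, len(tasks) + 1):
        for group in combinations(tasks, k):
            # Intent: datasets shared by every task in the group.
            intent = set.intersection(*(context[t] for t in group))
            # Extent: every task whose needs include the whole intent.
            extent = frozenset(t for t in tasks if intent <= context[t])
            if intent and extent not in seen:
                seen.add(extent)
                out.append((sorted(extent), sorted(intent)))
    return out

for extent, intent in concepts(needs):
    print(f"place datasets {intent} with tasks {extent} in one data center")
```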
{"title":"Data placement strategy for massive data applications based on FCA approach","authors":"Zaki Brahmi, Sahar Mili, Rihab Derouiche","doi":"10.1109/AICCSA.2016.7945616","DOIUrl":"https://doi.org/10.1109/AICCSA.2016.7945616","url":null,"abstract":"Massive data applications such as E-science applications are characterized by complex treatments on large amounts of data which need to be stored in distributed data centers. In fact, when one task needs several datasets from different data centers, moving these data may cost a lot of time and cause energy's high consumption. Moreover, when the number of the data centers involved in the execution of tasks is high, the total data movement and the execution time increase dramatically and become a bottleneck, since the data centers have a limited bandwidth. Thus, we need a good data placement strategy to minimise the data movement between data centers and reduce the energy consumed. Indeed, many researches are concerned with data placement strategy that distributes data in ways that are advantageous for application execution. In this paper, our data placement strategy aims at grouping the maximum of data and of tasks in a minimal number of data centers. It is based on the Formal Concept Analysis approach (FCA) because its notion of a concept respects our idea since it faithfully represents a group of tasks and data that are required for their execution. It is based on four steps: 1) Hierarchical organization of tasks using Formal Concepts Analysis approach, 2) Selection of candidate concepts, 3) Assigning data in the appropriate data centers and 4) Data replication. Simulations show that our strategy can effectively reduce the data movement and the average query spans compared to the genetic approach.","PeriodicalId":448329,"journal":{"name":"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123221622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ensuring consistent dynamic adaptation: An approach from design to runtime
Pub Date: 2016-11-01
DOI: 10.1109/AICCSA.2016.7945662
Ngoc-Tho Huynh, M. Segarra, A. Beugnard
Adaptive software is a class of software that can dynamically modify its own internal structure, and hence its behavior, at run time in response to changes in its operating environment. Adaptive software development has been an emerging research area of software engineering over the last decade. Many existing approaches use techniques from software product lines to develop adaptive software: they use models to specify the variability and architecture of a product family and to generate product architectures. These models are also used in a generation process to derive the reconfiguration actions carried out at runtime. However, replacing components with others at runtime remains a complex task, since it must ensure the validity of the new version while preserving the correct completion of ongoing activities. In this paper, we propose an approach for specifying, at design time, the information needed to identify the best moment to reconfigure the system. Moreover, we define an adaptation mechanism that uses this information to perform a consistent dynamic adaptation and guarantee system consistency.
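The runtime side of such a mechanism typically amounts to swapping a component only once it is quiescent, i.e. when all in-flight activities that use it have completed. A minimal threading sketch under that assumption; the paper's design-time information is reduced here to a simple activity counter and a validation hook:

```python
import threading
from contextlib import contextmanager

class AdaptiveComponent:
    """Holds the current implementation; swaps only at a safe (quiescent) point."""

    def __init__(self, impl):
        self._impl = impl
        self._active = 0                    # ongoing activities using _impl
        self._cond = threading.Condition()

    @contextmanager
    def use(self):
        """Mark an ongoing activity; reconfiguration waits until it completes."""
        with self._cond:
            self._active += 1
        try:
            yield self._impl
        finally:
            with self._cond:
                self._active -= 1
                self._cond.notify_all()

    def reconfigure(self, new_impl, validate):
        """Install new_impl once it is valid and no activity is in flight."""
        if not validate(new_impl):          # reject invalid new versions
            raise ValueError("new component version failed validation")
        with self._cond:
            self._cond.wait_for(lambda: self._active == 0)  # quiescence
            self._impl = new_impl

comp = AdaptiveComponent(lambda x: x + 1)
with comp.use() as f:
    print(f(41))                            # 42, with the old version
comp.reconfigure(lambda x: x * 2, validate=callable)
with comp.use() as f:
    print(f(21))                            # 42, with the new version
```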
{"title":"Ensuring consistent dynamic adaptation: An approach from design to runtime","authors":"Ngoc-Tho Huynh, M. Segarra, A. Beugnard","doi":"10.1109/AICCSA.2016.7945662","DOIUrl":"https://doi.org/10.1109/AICCSA.2016.7945662","url":null,"abstract":"Adaptive software is a class of software which is able to dynamically modify at run-time its own internal structure and hence its behavior in response to changes in its operating environment. Adaptive software development has been an emerging research area of software engineering in the last decade. Many existing approaches use techniques issued from software product line to develop adaptive software. They use models to specify variability and architecture of a product family and generate product architecture. These models are also used in a generation process to deduce reconfiguration actions carried out at runtime. However, the replacement of components by another ones at runtime remains a complex task since it must ensure the validity of new version, in addition to preserving the correct completion of ongoing activities. In this paper, we propose an approach to specify the necessary information at design time for identifying the best moment to reconfigure the system. Moreover, we define an adaptation mechanism to take this information and realize a consistent dynamic adaptation to guarantee the system consistency.","PeriodicalId":448329,"journal":{"name":"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124213858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An improved Arabic handwritten recognition system using embedded training based on HMMs
Pub Date: 2016-11-01
DOI: 10.1109/AICCSA.2016.7945773
M. AMROUCH, M. Rabi, D. Mammass
In this paper, we present a system for the offline recognition of cursive Arabic handwritten text based on Hidden Markov Models (HMMs). The system is analytical, without explicit segmentation, and uses embedded training to build and enhance the character models. Feature extraction, preceded by baseline estimation, is statistical and geometric, integrating both the peculiarities of the text and the pixel-distribution characteristics of the word image. These features are modeled with HMMs trained by embedded training. Experiments on images from the benchmark IFN/ENIT database show that the proposed system improves recognition.
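The flavor of training HMMs on unsegmented words can be sketched with hmmlearn: feature sequences are concatenated and passed with a lengths vector, so parameters are re-estimated over whole sequences rather than pre-segmented characters. This is only a simplified stand-in; true embedded training concatenates per-character models along each word's transcription, which hmmlearn does not provide out of the box:

```python
import numpy as np
from hmmlearn import hmm

# Toy sliding-window feature vectors (e.g. statistical/geometric features
# extracted column by column from a word image after baseline estimation).
rng = np.random.default_rng(0)
word1 = rng.normal(0, 1, (40, 6))   # 40 frames, 6 features each
word2 = rng.normal(0, 1, (55, 6))

X = np.vstack([word1, word2])
lengths = [len(word1), len(word2)]  # hmmlearn trains on whole sequences

# Gaussian HMM standing in for a word-level chain of character models.
model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
model.fit(X, lengths)

# Recognition: score a new word's features under each word-class model
# and pick the argmax (one model per lexicon entry in a real system).
print(model.score(rng.normal(0, 1, (30, 6))))
```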
{"title":"An improved Arabic handwritten recognition system using embedded training based on HMMs","authors":"M. AMROUCH, M. Rabi, D. Mammass","doi":"10.1109/AICCSA.2016.7945773","DOIUrl":"https://doi.org/10.1109/AICCSA.2016.7945773","url":null,"abstract":"In this paper we present a system for offline recognition cursive Arabic handwritten text based on Hidden Markov Models (HMMs). The system is analytical without explicit segmentation used embedded training to perform and enhance the character models. Extraction features preceded by baseline estimation are statistical and geometric to integrate both the peculiarities of the text and the pixel distribution characteristics in the word image. These features are modelled using hidden Markov models and trained by embedded training. The experiments on images of the benchmark IFN/ENIT database show that the proposed system improves recognition.","PeriodicalId":448329,"journal":{"name":"2016 IEEE/ACS 13th International Conference of Computer Systems and Applications (AICCSA)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125381388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}