A morphological analysis of Arabic language based on multicriteria decision making: TAGHIT system
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5647958
Cheragui Mohamed Amine, Hoceini Youssef, Abbas Moncef
In this paper, we present our work on Arabic morphology, focusing on mechanisms for resolving morphological ambiguity in Arabic text. This research has given birth to TAGHIT, a morphosyntactic tagger for Arabic. The originality of our work lies in implementing, within our system, a new approach to disambiguation that differs from those that currently exist and is based on principles and techniques from multicriteria decision making.
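The abstract does not spell out the multicriteria procedure, but the general idea can be illustrated with a toy weighted-sum ranking of candidate analyses. This is a minimal sketch; the criteria names, weights, and scores below are invented for illustration, not taken from the paper:

```python
# Hypothetical illustration of multicriteria ranking of morphological
# analyses; criteria, weights, and scores are invented for the example.

def rank_analyses(candidates, weights):
    """Rank candidate analyses by a weighted sum over criteria."""
    def score(analysis):
        return sum(weights[c] * analysis["scores"][c] for c in weights)
    return sorted(candidates, key=score, reverse=True)

# Each candidate analysis of an ambiguous token carries a score per criterion.
candidates = [
    {"tag": "NOUN", "scores": {"context_fit": 0.6, "frequency": 0.8, "pattern": 0.7}},
    {"tag": "VERB", "scores": {"context_fit": 0.9, "frequency": 0.4, "pattern": 0.6}},
]
weights = {"context_fit": 0.5, "frequency": 0.3, "pattern": 0.2}

best = rank_analyses(candidates, weights)[0]
print(best["tag"])  # VERB: strongest candidate under this weighting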
{"title":"A morphological analysis of Arabic language based on multicriteria decision making: TAGHIT system","authors":"Cheragui Mohamed Amine, Hoceini Youssef, Abbas Moncef","doi":"10.1109/ICMWI.2010.5647958","DOIUrl":"https://doi.org/10.1109/ICMWI.2010.5647958","url":null,"abstract":"In this paper, we present our work on Arabic morphology and especially the mechanisms for resolving the morphological ambiguity in Arabic text. These researches, which have given birth to TAGHIT system which is a morphosyntactic tagger for Arabic, where the originality of our work lies in the implementation of our internal system of a new approach to disambiguation different from those that currently exist, which is based on the principles and techniques issued from multicriteria decision making.","PeriodicalId":404577,"journal":{"name":"2010 International Conference on Machine and Web Intelligence","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125715099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An encryption algorithm inspired from DNA
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5648076
Souhila Sadeg, Mohamed Gougache, N. Mansouri, H. Drias
DNA cryptography is a promising new direction in cryptography research that emerged with progress in the field of DNA computing. DNA can be used not only to store and transmit information, but also to perform computations. The massive parallelism and extraordinary information density inherent in this molecule can be exploited for cryptographic purposes, and several DNA-based algorithms have been proposed for encryption, authentication, and so on. The main current difficulties of DNA cryptography are the absence of a theoretical basis, high-tech laboratory requirements, and computational limitations. In this paper, a symmetric-key block cipher algorithm is proposed. It includes a step that simulates ideas from the processes of transcription (transfer from DNA to mRNA) and translation (from mRNA into amino acids). This algorithm is, we believe, computationally efficient and very secure, since it was designed following the recommendations of cryptography experts and applies Shannon's fundamental principles of confusion and diffusion. Tests were conducted and the results are very satisfactory.
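As a hedged illustration of the transcription/translation analogy (not the authors' actual cipher), the sketch below maps plaintext bits to DNA bases, complements them into mRNA, and reads codons as amino-acid indices; the base encoding and codon numbering are assumptions of this example:

```python
# Illustrative sketch only: plaintext bits -> DNA bases -> "mRNA" -> codon
# indices. The encoding tables are invented; the paper's cipher steps are
# not reproduced here.

BIN_TO_DNA = {"00": "A", "01": "C", "10": "G", "11": "T"}
DNA_TO_MRNA = {"A": "U", "C": "G", "G": "C", "T": "A"}  # complement, T -> U

def to_dna(data: bytes) -> str:
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(BIN_TO_DNA[bits[i:i + 2]] for i in range(0, len(bits), 2))

def transcribe(dna: str) -> str:
    return "".join(DNA_TO_MRNA[base] for base in dna)

def translate(mrna: str) -> list:
    # Read triplets (codons); map each to an integer, as amino acids would be.
    codons = [mrna[i:i + 3] for i in range(0, len(mrna) - len(mrna) % 3, 3)]
    alphabet = "ACGU"
    return [alphabet.index(c[0]) * 16 + alphabet.index(c[1]) * 4 + alphabet.index(c[2])
            for c in codons]

dna = to_dna(b"Hi")
print(dna, transcribe(dna), translate(transcribe(dna)))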
{"title":"An encryption algorithm inspired from DNA","authors":"Souhila Sadeg, Mohamed Gougache, N. Mansouri, H. Drias","doi":"10.1109/ICMWI.2010.5648076","DOIUrl":"https://doi.org/10.1109/ICMWI.2010.5648076","url":null,"abstract":"DNA cryptography is a new promising direction in cryptography research that emerged with the progress in DNA computing field. DNA can be used not only to store and transmit information, but also to perform computations. The massive parallelism and extraordinary information density inherent in this molecule are exploited for cryptographic purposes, and several DNA based algorithms are proposed for encryption, authentification and so on. The current main difficulties of DNA cryptography are the absence of theoretical basis, the high tech lab requirements and computation limitations. In this paper, a symmetric key bloc cipher algorithm is proposed. It includes a step that simulates ideas from the processes of transcription (transfer from DNA to mRNA) and translation (from mRNA into amino acids). This algorithm is, we believe, efficient in computation and very secure, since it was designed following recommendations of experts in cryptography and focuses on the application of the fundamental principles of Shannon: Confusion and diffusion. Tests were conducted and the results are very satisfactory.","PeriodicalId":404577,"journal":{"name":"2010 International Conference on Machine and Web Intelligence","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127151538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating legacy systems in a SOA using an agent based approach for information system agility
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5648011
H. Faycal, D. Habiba, Mellah Hakima
This paper presents an approach based on multi-agent systems (MAS) for encapsulating the features of traditional applications, also called legacy systems. We focus in particular on legacy systems based on the Common Object Request Broker Architecture (CORBA). The main objective of the encapsulation is to simplify the integration of this kind of application into a service-oriented architecture (SOA). We design an interface using the Java Agent DEvelopment framework (JADE), which enables automatic generation of code for CORBA clients and ontology classes. The proposed system creates a representative agent for each feature to be wrapped and allows the composition of functions according to predefined templates. The system uses the Web Service Integration Gateway (WSIG) to publish the capabilities of the representative agents as web services that can then be used in a SOA.
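JADE itself is a Java framework, so the following Python mock is only a language-agnostic sketch of the wrapping pattern the abstract describes: one representative agent per legacy feature, registered under a service name. All class and method names here are hypothetical:

```python
# Sketch of the "representative agent" wrapping pattern; everything named
# here is a stand-in invented for illustration, not JADE/WSIG API.

class LegacyCorbaFeature:
    """Stand-in for a feature reachable through a CORBA client stub."""
    def get_balance(self, account_id):
        return {"account": account_id, "balance": 100.0}

class RepresentativeAgent:
    """Wraps one legacy feature and serves requests under a service name."""
    def __init__(self, service_name, feature, operation):
        self.service_name = service_name
        self._call = getattr(feature, operation)

    def handle_request(self, **kwargs):
        return self._call(**kwargs)

registry = {}  # plays the role of the directory facilitator / WSIG gateway

agent = RepresentativeAgent("balance-service", LegacyCorbaFeature(), "get_balance")
registry[agent.service_name] = agent

print(registry["balance-service"].handle_request(account_id="A-42"))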
{"title":"Integrating legacy systems in a SOA using an agent based approach for information system agility","authors":"H. Faycal, D. Habiba, Mellah Hakima","doi":"10.1109/ICMWI.2010.5648011","DOIUrl":"https://doi.org/10.1109/ICMWI.2010.5648011","url":null,"abstract":"This paper presents an approach based on multi-agent system (MAS) for encapsulating the features of traditional applications also called legacy systems. We focus our interest particularly on legacy based on Comment Object Request Broker Architecture (CORBA) technology. The encapsulation main objective is to simplify the possibilities for integrating this kind of application in a service-oriented architecture (SOA). We design an interface using Java Agent DEvelopment framework (JADE), which enables automatic generation of code for CORBA clients and ontology classes. The proposed system creates for each feature to wrap a representative agent, and allows the composition of functions according to predefined templates. The system uses the Web Service Integration Gateway (WSIG) to publish capacities of representative agents as web services that will be used in a SOA.","PeriodicalId":404577,"journal":{"name":"2010 International Conference on Machine and Web Intelligence","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117057208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic threshold for replicas placement strategy
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5647887
Mohamed Redha Djebbara, H. Belbachir
Data replication is a very important technique for data availability in grids. One of the challenges in data replication is replica placement. In this paper, we present our contribution: a replica placement strategy for a hierarchical grid. Our approach is based on a dynamic threshold, in contrast to other replica placement strategies, which use a static threshold. We show that the threshold depends on several factors, such as the size of the data to be replicated and the consumed bandwidth, which is determined by the level of the tree representing the grid.
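The abstract does not give the threshold formula, so the sketch below is an assumed combination of file size, bandwidth, and tree level, intended only to make the "dynamic threshold" idea concrete:

```python
# Hedged sketch of a dynamic replication threshold; this formula is an
# illustrative assumption, not the one from the paper.

def dynamic_threshold(file_size_mb, bandwidth_mbps, tree_level, base=10.0):
    """Larger files and deeper (lower-bandwidth) levels raise the threshold."""
    transfer_cost = file_size_mb / bandwidth_mbps  # seconds to move a replica
    return base * transfer_cost * (1 + tree_level)

def should_replicate(access_count, *args, **kwargs):
    """Replicate a file once its popularity exceeds the dynamic threshold."""
    return access_count > dynamic_threshold(*args, **kwargs)

# A 500 MB file over a 100 Mb/s link (12.5 MB/s) at level 2 of the hierarchy:
print(dynamic_threshold(500, 100 / 8, 2))          # 1200.0 "accesses"
print(should_replicate(1500, 500, 100 / 8, 2))     # True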
{"title":"Dynamic threshold for replicas placement strategy","authors":"Mohamed Redha Djebbara, H. Belbachir","doi":"10.1109/ICMWI.2010.5647887","DOIUrl":"https://doi.org/10.1109/ICMWI.2010.5647887","url":null,"abstract":"The data replication is a very important technique for the availability of data in the grids. One of the challenges in data replication is the replicas placement. In this paper, we present our contribution by proposing a replicas placement strategy in a hierarchical grid. Our approach is based on a dynamic threshold, contrary to the other strategies of replicas placement which use a static threshold. In our strategy we show that the threshold depends on several factors such as the size of the data to be replicated, the consumed bandwidth which is explained by the level of the tree related to the grid.","PeriodicalId":404577,"journal":{"name":"2010 International Conference on Machine and Web Intelligence","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124624386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient extraction of news articles based on RSS crawling
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5647851
George Adam, C. Bouras, V. Poulopoulos
The expansion of the World Wide Web has led to a state where a vast number of Internet users face, and have to overcome, the major problem of discovering desired information. Hundreds of web pages and weblogs are inevitably generated or changed on a daily basis. The main problem arising from this continuous generation and alteration of web pages is the discovery of useful information, a task that is difficult even for experienced Internet users. Many mechanisms have been built to tackle the puzzle of information discovery on the Internet; they are mostly based on crawlers that browse the WWW, downloading pages and collecting information that might be of interest to users. In this manuscript we describe a mechanism that fetches web pages containing news articles from major news portals and blogs. This mechanism is built to support tools that acquire news articles from all over the world, process them, and present them back to end users in a personalized manner.
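A minimal sketch of the RSS-driven fetching step, using the feedparser library; the feed URL is a placeholder, and a full crawler would additionally download each linked page and extract the article body:

```python
# Discover article links from an RSS feed with feedparser; the URL below
# is an example placeholder, not one from the paper.

import feedparser

def fetch_article_links(feed_url):
    feed = feedparser.parse(feed_url)
    return [(entry.title, entry.link) for entry in feed.entries]

for title, link in fetch_article_links("https://example.com/news/rss"):
    print(title, "->", link)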
{"title":"Efficient extraction of news articles based on RSS crawling","authors":"George Adam, C. Bouras, V. Poulopoulos","doi":"10.1109/ICMWI.2010.5647851","DOIUrl":"https://doi.org/10.1109/ICMWI.2010.5647851","url":null,"abstract":"The expansion of the World Wide Web has led to a state where a vast amount of Internet users face and have to overcome the major problem of discovering desired information. It is inevitable that hundreds of web pages and weblogs are generated daily or changing on a daily basis. The main problem that arises from the continuous generation and alteration of web pages is the discovery of useful information, a task that becomes difficult even for the experienced internet users. Many mechanisms have been constructed and presented in order to overcome the puzzle of information discovery on the Internet and they are mostly based on crawlers which are browsing the WWW, downloading pages and collect the information that might be of user interest. In this manuscript we describe a mechanism that fetches web pages that include news articles from major news portals and blogs. This mechanism is constructed in order to support tools that are used to acquire news articles from all over the world, process them and present them back to the end users in a personalized manner.","PeriodicalId":404577,"journal":{"name":"2010 International Conference on Machine and Web Intelligence","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127040545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A metacomputing approach for the winner determination problem in combinatorial auctions
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5647909
Kahina Achour, Louiza Slaouti, D. Boughaci
Grid computing is an innovative approach that permits the use of computing resources that are far apart and connected by wide area networks. This recent technology has become extremely popular for optimizing computing resources and managing data and computing workloads. The aim of this paper is to propose a metacomputing approach for the winner determination problem (WDP) in combinatorial auctions. The proposed approach is a hybrid genetic algorithm adapted to the WDP and implemented on a grid computing platform.
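A toy genetic algorithm for the WDP, written from the problem statement rather than from the paper's hybrid algorithm, might look like the sketch below; the bids, operators, and parameters are illustrative:

```python
# Toy GA for winner determination: a chromosome is a bit vector selecting
# bids; selections with overlapping bundles are infeasible and score zero.
# Everything here is an invented example, not the paper's algorithm.

import random

bids = [({"a", "b"}, 10), ({"b", "c"}, 8), ({"c"}, 5), ({"d"}, 4)]

def fitness(chrom):
    taken, revenue = set(), 0
    for gene, (bundle, price) in zip(chrom, bids):
        if gene:
            if taken & bundle:
                return 0          # conflicting winners: infeasible
            taken |= bundle
            revenue += price
    return revenue

def evolve(pop_size=30, generations=50, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in bids] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(bids))
            child = a[:cut] + b[cut:]             # one-point crossover
            child = [g ^ (random.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))  # usually finds {a,b}+{c}+{d}, revenue 19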
{"title":"A metacomputing approach for the winner determination problem in combinatorial auctions","authors":"Kahina Achour, Louiza Slaouti, D. Boughaci","doi":"10.1109/ICMWI.2010.5647909","DOIUrl":"https://doi.org/10.1109/ICMWI.2010.5647909","url":null,"abstract":"Grid computing is an innovative approach permitting the use of computing resources which are far apart and connected by Wide Area Networks. This recent technology has become extremely popular to optimize computing resources and manage data and computing workloads. The aim of this paper is to propose a metacomputing approach for the winner determination problem in combinatorial auctions (WDP). The proposed approach is a hybrid genetic algorithm adapted to the WDP and implemented on a grid computing platform.","PeriodicalId":404577,"journal":{"name":"2010 International Conference on Machine and Web Intelligence","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131113985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building a neural network-based English-to-Arabic transfer module from an unrestricted domain
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5648157
Rasha Al Dam, A. Guessoum
This paper presents a transfer module for an English-to-Arabic machine translation system (MTS) built from an English-to-Arabic bilingual corpus. We propose an approach that builds the transfer module as a new transfer-based machine translation component using Artificial Neural Networks (ANN). The idea is to let the ANN-based transfer module automatically learn correspondences between source- and target-language structures from a large set of English sentences and their Arabic translations. The paper presents the methodology for corpus building, then introduces the approach followed to develop the transfer module, and finally presents the experimental results, which are very encouraging.
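To make the learning setup concrete, here is an illustrative-only sketch in which a small network maps encoded English structures to Arabic structure labels; the tag encodings and the four training pairs are invented, and scikit-learn's MLP stands in for the authors' ANN:

```python
# Invented toy example of structure-to-structure transfer learning; the
# encodings, labels, and data are assumptions, not the paper's corpus.

from sklearn.neural_network import MLPClassifier

# Source structures encoded as fixed-length tag-id vectors (0 = padding).
X = [
    [1, 2, 3, 0],  # PRON VERB NOUN      -> verb-initial (VSO) target
    [1, 2, 3, 3],  # PRON VERB NOUN NOUN -> VSO
    [3, 4, 3, 0],  # NOUN ADJ NOUN       -> noun-adjective construction
    [3, 4, 0, 0],  # NOUN ADJ            -> noun-adjective construction
]
y = ["VSO", "VSO", "NOUN-ADJ", "NOUN-ADJ"]

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[1, 2, 3, 0]]))  # expected: ['VSO']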
{"title":"Building a neural network-based English-to-Arabic transfer module from an unrestricted domain","authors":"Rasha Al Dam, A. Guessoum","doi":"10.1109/ICMWI.2010.5648157","DOIUrl":"https://doi.org/10.1109/ICMWI.2010.5648157","url":null,"abstract":"This paper presents a Transfer Module for an English-to-Arabic Machine Translation System (MTS) using an English-to-Arabic Bilingual Corpus. We propose an approach to build a transfer module by building a new transfer-based system for machine translation using Artificial Neural Networks (ANN). The idea is to allow the ANN-based transfer module to automatically learn correspondences between source and target language structures using a large set of English sentences and their Arabic translations. The paper presents the methodology for corpus building. It then introduces the approach that has been followed to develop the transfer module. It finally presents the experimental results which are very encouraging.","PeriodicalId":404577,"journal":{"name":"2010 International Conference on Machine and Web Intelligence","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127455357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A comparative study of Neural networks architectures on Arabic text categorization using feature extraction
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5648051
F. Harrag, A. Al-Salman, Mohammed Benmohammed
In this paper, we present a Neural Network (NN) model for classifying Arabic texts. We propose the use of Singular Value Decomposition (SVD) as a preprocessor for the NN, with the aim of reducing the data in both size and dimensionality. Indeed, the use of SVD makes the data more amenable to classification and speeds up the convergence of the training process. Specifically, Multilayer Perceptron (MLP) and Radial Basis Function (RBF) classifiers are implemented and their effectiveness compared. Experiments are conducted on an in-house corpus of Arabic texts, with precision, recall, and F-measure used to quantify categorization effectiveness. The results show that the proposed SVD-supported MLP/RBF NN classifier achieves high effectiveness. They also show that the MLP classifier outperforms the RBF classifier, and that the SVD-supported NN classifier is better than the basic NN as far as Arabic text categorization is concerned.
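The SVD-then-classify pipeline can be sketched with off-the-shelf scikit-learn components standing in for the authors' implementation; the documents below are English placeholders for the in-house Arabic corpus:

```python
# Sketch of the SVD-preprocessed NN classifier on placeholder data; the
# corpus, dimensions, and network sizes are illustrative assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

docs = ["economy market trade", "match team goal",
        "market finance bank", "team player score"]
labels = ["economy", "sport", "economy", "sport"]

clf = make_pipeline(
    TfidfVectorizer(),               # documents -> term weights
    TruncatedSVD(n_components=2),    # SVD reduces dimensionality before the NN
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
)
clf.fit(docs, labels)
print(clf.predict(["bank trade"]))   # likely ['economy'] on this toy data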
{"title":"A comparative study of Neural networks architectures on Arabic text categorization using feature extraction","authors":"F. Harrag, A. Al-Salman, Mohammed Benmohammed","doi":"10.1109/ICMWI.2010.5648051","DOIUrl":"https://doi.org/10.1109/ICMWI.2010.5648051","url":null,"abstract":"In this paper, we present a model based on the Neural Network (NN) for classifying Arabic texts. We propose the use of Singular Value Decomposition (SVD) as a preprocessor of NN with the aim of further reducing data in terms of both size and dimensionality. Indeed, the use of SVD makes data more amenable to classification and the convergence training process faster. Specifically, the effectiveness of the Multilayer Perceptron (MLP) and the Radial Basis Function (RBF) classifiers are implemented. Experiments are conducted using an in-house corpus of Arabic texts. Precision, recall and F-measure are used to quantify categorization effectiveness. The results show that the proposed SVD-Supported MLP/RBF ANN classifier is able to achieve high effectiveness. Experimental results also show that the MLP classifier outperforms the RBF classifier and that the SVD-supported NN classifier is better than the basic NN, as far as Arabic text categorization is concerned.","PeriodicalId":404577,"journal":{"name":"2010 International Conference on Machine and Web Intelligence","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128436827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Latent semantic analysis-based image auto annotation
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5648152
Mahdia Bakalem, N. Benblidia, S. Oukid
Image retrieval is a particular case of information retrieval. It adds more complex mechanisms for assessing relevance: visual content analysis and/or analysis of additional textual content. Image auto-annotation is a technique that associates text with images, making it possible to retrieve image documents as textual documents, as in classical information retrieval. Auto-annotation is therefore an effective technology for improving image retrieval. In this work, we propose a first version of the AnnotB-LSA algorithm for image auto-annotation. Integrating the LSA model makes it possible to extract latent semantic relations among the textual descriptors and to reduce the ambiguity (polysemy, synonymy) between image annotations.
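A hedged sketch of LSA over a toy term-by-image matrix shows how latent neighbours among annotation terms emerge; this is our illustration of the underlying technique, not the AnnotB-LSA algorithm itself:

```python
# LSA over a tiny term-by-image co-occurrence matrix; vocabulary and
# matrix are invented for the example.

import numpy as np

vocab = ["sea", "beach", "sand", "car", "road"]
# Rows = annotation terms, columns = annotated training images.
A = np.array([
    [1, 1, 0],   # sea
    [1, 0, 0],   # beach
    [0, 1, 0],   # sand
    [0, 0, 1],   # car
    [0, 0, 1],   # road
], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
terms_k = U[:, :k] * s[:k]           # term coordinates in the latent space

def related_terms(term, top=3):
    """Rank vocabulary terms by cosine similarity in the latent space."""
    q = terms_k[vocab.index(term)]
    sims = terms_k @ q / (np.linalg.norm(terms_k, axis=1) * np.linalg.norm(q) + 1e-12)
    return sorted(zip(vocab, sims), key=lambda p: -p[1])[:top]

print(related_terms("sea"))  # "beach" and "sand" surface as latent neighbours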
{"title":"Latent semantic analysis-based image auto annotation","authors":"Mahdia Bakalem, N. Benblidia, S. Oukid","doi":"10.1109/ICMWI.2010.5648152","DOIUrl":"https://doi.org/10.1109/ICMWI.2010.5648152","url":null,"abstract":"The image retrieval is a particular case of information retrieval. It adds more complex mechanisms to relevance image retrieval: visual content analysis and/or additional textual content. The image auto annotation is a technique that associates text to image, and permits to retrieve image documents as textual documents, thus as in information retrieval. The image auto annotation is then an effective technology for improving the image retrieval. In this work, we propose the AnnotB-LSA algorithm in its first version for the image auto-annotation. The integration of the LSA model permits to extract the latent semantic relations in the textual describers and to minimize the ambiguousness (polysemy, synonymy) between the annotations of images.","PeriodicalId":404577,"journal":{"name":"2010 International Conference on Machine and Web Intelligence","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127488149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic construction of an on-line learning domain
Pub Date: 2010-11-29 | DOI: 10.1109/ICMWI.2010.5648199
Chaoui Mohammed, L. M. Tayeb
The field of education has always been closely connected with information and communication technologies (ICT). Digital and network technologies are steadily growing in importance, and the Web plays a central role in this growth. In an online learning application, the Web serves as the medium for producing, managing, and distributing content. This space evolves rapidly and is governed by factors concerning its function and modes of signification, so it poses difficulties for teachers, especially in extracting relevant information. Our research work is situated in this context. Our goal was to offer a Web-based architecture for an e-learning domain. We first use the Web as a documentary medium, relying on the Google search engine, and go beyond this by proposing a model for creating an e-learning domain. We then study the evolution of the Semantic Web, and more specifically ontologies, creating an ontology and integrating it into the same model. We finish by applying a filtering method that extracts the relevant parts to build the online education domain.
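The final filtering step might, under simple assumptions, score retrieved pages against the concept terms of the domain ontology and keep only sufficiently relevant ones; the terms and threshold below are illustrative, not the paper's method:

```python
# Toy ontology-driven relevance filter; the term set and threshold are
# invented assumptions for the sketch.

ontology_terms = {"learning", "course", "assessment", "learner", "tutor"}

def relevance(text, terms=ontology_terms):
    """Fraction of words in the page that match ontology concept terms."""
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,") in terms)
    return hits / max(len(words), 1)

pages = [
    "The course pairs each learner with a tutor and ends with an assessment.",
    "Buy cheap tickets for tonight, limited offer.",
]
relevant = [p for p in pages if relevance(p) > 0.2]
print(relevant)  # only the education-related page survives the filter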
{"title":"Automatic construction of an on-line learning domain","authors":"Chaoui Mohammed, L. M. Tayeb","doi":"10.1109/ICMWI.2010.5648199","DOIUrl":"https://doi.org/10.1109/ICMWI.2010.5648199","url":null,"abstract":"The field of education has always been closely connected with information and communication technologies (ICT). Currently, we perceive that digital and network technologies increase in their importance where the Web plays a central role. In an application of online learning, Web is used as representative supports for producing, managing and distributing contents. This space is evolving rapidly and is governed by factors regarding its function and modes of signification, so it poses difficulties for teachers, especially for extracting the relevant informations. It is in this context that our research work is situated. Our goal was to offer a Web-based architecture for an e-learning domain. First time using the Web as a medium documentary based on a search engine 'Google', going beyond to the proposal of a model for creating a field for e-Learning. Then, we studied the evolutionary lines of the semantic web and more specifically anthologies, creating and integrating ontology in the same model. We finished by applying a filtering method to extract the relevant parts to build the field of online education.","PeriodicalId":404577,"journal":{"name":"2010 International Conference on Machine and Web Intelligence","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115928440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}