
Latest publications — 2010 International Conference on Machine and Web Intelligence

A morphological analysis of Arabic language based on multicriteria decision making: TAGHIT system
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5647958
Cheragui Mohamed Amine, Hoceini Youssef, Abbas Moncef
In this paper, we present our work on Arabic morphology, in particular mechanisms for resolving morphological ambiguity in Arabic text. This research has led to the TAGHIT system, a morphosyntactic tagger for Arabic. The originality of our work lies in implementing, within the system, a new disambiguation approach that differs from existing ones: it is based on principles and techniques drawn from multicriteria decision making.
Citations: 2
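To make the multicriteria idea concrete, here is a minimal, hypothetical sketch of weighted-sum scoring over competing morphological analyses of one ambiguous token. The criteria names, scores and weights are invented for illustration and are not taken from the TAGHIT system.

```python
# Hypothetical multicriteria disambiguation: each candidate analysis of an
# ambiguous token is scored against several weighted criteria, and the
# best-scoring analysis is retained. Criteria and weights are invented.

def disambiguate(candidates, weights):
    """Return the candidate analysis with the highest weighted score."""
    def score(cand):
        return sum(weights[c] * cand["scores"][c] for c in weights)
    return max(candidates, key=score)

# Two competing analyses of one ambiguous Arabic surface form.
candidates = [
    {"tag": "NOUN", "scores": {"context_fit": 0.9, "frequency": 0.4, "affix_match": 0.7}},
    {"tag": "VERB", "scores": {"context_fit": 0.3, "frequency": 0.8, "affix_match": 0.5}},
]
weights = {"context_fit": 0.5, "frequency": 0.2, "affix_match": 0.3}

best = disambiguate(candidates, weights)  # NOUN: 0.74 vs VERB: 0.46
```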
An encryption algorithm inspired from DNA
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5648076
Souhila Sadeg, Mohamed Gougache, N. Mansouri, H. Drias
DNA cryptography is a promising new direction in cryptography research that emerged with progress in the field of DNA computing. DNA can be used not only to store and transmit information, but also to perform computations. The massive parallelism and extraordinary information density inherent in this molecule can be exploited for cryptographic purposes, and several DNA-based algorithms have been proposed for encryption, authentication and so on. The main current difficulties of DNA cryptography are the absence of a theoretical basis, high-tech laboratory requirements and computational limitations. In this paper, a symmetric-key block cipher algorithm is proposed. It includes a step that simulates ideas from the processes of transcription (transfer from DNA to mRNA) and translation (from mRNA into amino acids). We believe this algorithm is computationally efficient and very secure, since it was designed following the recommendations of experts in cryptography and focuses on applying Shannon's fundamental principles of confusion and diffusion. Tests were conducted and the results are very satisfactory.
Citations: 59
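The transcription/translation step the abstract describes can be sketched in a few lines. The bit-to-base encoding below is an invented assumption, and the codon table is only a small fragment of the real genetic code; neither is the paper's actual cipher table.

```python
# Sketch of the biological metaphor the cipher borrows: plaintext bits are
# mapped onto DNA bases, "transcribed" to mRNA by complementation, then
# "translated" codon-by-codon into amino-acid symbols. The bit encoding is
# an assumption; the codon entries are a fragment of the standard code.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}   # complement, T -> U
CODON_TABLE = {"UCA": "S", "CGU": "R", "UAC": "Y", "GCA": "A", "AUG": "M"}

def to_dna(bits):
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def transcribe(dna):
    return "".join(DNA_TO_MRNA[base] for base in dna)

def translate(mrna):
    return "".join(CODON_TABLE.get(mrna[i:i+3], "?") for i in range(0, len(mrna), 3))

dna = to_dna("001011" + "100100")   # 12 bits -> 6 bases -> 2 codons
mrna = transcribe(dna)              # "AGTGCA" -> "UCACGU"
protein = translate(mrna)           # "UCA" -> S, "CGU" -> R
```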
Integrating legacy systems in a SOA using an agent based approach for information system agility
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5648011
H. Faycal, D. Habiba, Mellah Hakima
This paper presents an approach based on a multi-agent system (MAS) for encapsulating the features of traditional applications, also called legacy systems. We focus in particular on legacy systems based on Common Object Request Broker Architecture (CORBA) technology. The main objective of the encapsulation is to simplify the integration of this kind of application into a service-oriented architecture (SOA). We design an interface using the Java Agent DEvelopment framework (JADE), which enables automatic generation of code for CORBA clients and ontology classes. The proposed system creates a representative agent for each feature to be wrapped, and allows functions to be composed according to predefined templates. The system uses the Web Service Integration Gateway (WSIG) to publish the capabilities of representative agents as web services to be used in a SOA.
Citations: 12
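The wrapping pattern itself is language-neutral; the paper uses Java/JADE over CORBA, but the core idea can be sketched with plain Python classes standing in for that machinery. All class and method names here are hypothetical.

```python
# Sketch of the "representative agent" wrapping idea: each legacy feature
# is exposed behind a uniform service interface and published in a
# registry, so a SOA layer can invoke it without knowing the legacy API.
# Plain Python stands in for the JADE/CORBA/WSIG machinery.

class LegacyBilling:                       # stands in for a CORBA servant
    def compute_invoice(self, amount, vat):
        return amount * (1 + vat)

class RepresentativeAgent:
    """Wraps one legacy feature behind a generic invoke() call."""
    def __init__(self, name, target, method):
        self.name = name
        self._call = getattr(target, method)

    def invoke(self, **kwargs):
        return self._call(**kwargs)

registry = {}                              # stands in for WSIG publication
agent = RepresentativeAgent("billing", LegacyBilling(), "compute_invoice")
registry[agent.name] = agent

# A SOA client only knows the service name and its parameters.
result = registry["billing"].invoke(amount=100.0, vat=0.2)
```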
Dynamic threshold for replicas placement strategy
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5647887
Mohamed Redha Djebbara, H. Belbachir
Data replication is a very important technique for ensuring data availability in grids. One of the challenges in data replication is replica placement. In this paper, we present our contribution: a replica placement strategy for a hierarchical grid. Our approach is based on a dynamic threshold, in contrast to other replica placement strategies, which use a static threshold. We show that the threshold depends on several factors, such as the size of the data to be replicated and the bandwidth consumed, which is determined by the node's level in the tree representing the grid.
Citations: 1
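As an illustration only, a dynamic threshold could combine the same factors the abstract names: replica size, available bandwidth, and depth in the grid tree. The formula below is invented for this sketch and is not the paper's.

```python
# Hypothetical dynamic replication threshold: the access count needed
# before a file is replicated locally scales with how costly the replica
# is to create (size over bandwidth) and with the node's tree level.
# The exact combination of factors is an invented illustration.

def dynamic_threshold(size_mb, bandwidth_mbps, tree_level, base=10.0):
    """Access-count threshold above which a file is replicated locally."""
    transfer_cost = size_mb / bandwidth_mbps   # seconds to fetch one copy
    return base * transfer_cost / (1 + tree_level)

# A file that is expensive to copy (big file, slow link, near the root)
# must be requested more often before creating a replica pays off.
t_slow = dynamic_threshold(size_mb=500, bandwidth_mbps=10, tree_level=0)
t_fast = dynamic_threshold(size_mb=500, bandwidth_mbps=100, tree_level=2)
```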
Efficient extraction of news articles based on RSS crawling
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5647851
George Adam, C. Bouras, V. Poulopoulos
The expansion of the World Wide Web has led to a state where a vast number of Internet users face, and have to overcome, the major problem of discovering desired information. Inevitably, hundreds of web pages and weblogs are generated or changed daily. The main problem arising from this continuous generation and alteration of web pages is the discovery of useful information, a task that is difficult even for experienced Internet users. Many mechanisms have been built to tackle the puzzle of information discovery on the Internet; most are based on crawlers that browse the WWW, downloading pages and collecting information that might interest users. In this manuscript we describe a mechanism that fetches web pages containing news articles from major news portals and blogs. This mechanism is built to support tools that acquire news articles from all over the world, process them, and present them back to end users in a personalized manner.
Citations: 13
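The RSS side of such a crawler reduces to parsing a feed and extracting each item's title and link, which tell the fetcher which article pages to download. The sketch below uses Python's standard-library XML parser, with a literal feed string standing in for an HTTP response to keep it self-contained; it is not the paper's implementation.

```python
# Minimal RSS-item extraction: parse an RSS 2.0 feed and collect the
# (title, link) pair of every <item>, i.e. the article URLs a crawler
# would then fetch. The feed string stands in for a network response.

import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example News</title>
  <item><title>Story one</title><link>http://example.com/1</link></item>
  <item><title>Story two</title><link>http://example.com/2</link></item>
</channel></rss>"""

def extract_items(feed_xml):
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

articles = extract_items(FEED)
```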
A metacomputing approach for the winner determination problem in combinatorial auctions
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5647909
Kahina Achour, Louiza Slaouti, D. Boughaci
Grid computing is an innovative approach that permits the use of computing resources that are far apart and connected by wide area networks. This technology has become extremely popular for optimizing computing resources and managing data and computing workloads. The aim of this paper is to propose a metacomputing approach for the winner determination problem (WDP) in combinatorial auctions. The proposed approach is a hybrid genetic algorithm adapted to the WDP and implemented on a grid computing platform.
Citations: 4
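To show what the GA is searching over: in the WDP, a chromosome is one bit per bid, fitness is the total revenue of the accepted bids, and two accepted bids sharing an item make the chromosome infeasible. The toy instance below scans all chromosomes exhaustively just to exhibit the encoding and fitness; the paper's hybrid genetic algorithm explores the same space heuristically, distributed on a grid.

```python
# WDP encoding a GA would evolve: bit i accepts bid i; fitness = summed
# prices of accepted bids, or -1 if two accepted bids share an item.
# Tiny instance, solved by brute force only to illustrate the objective.

from itertools import product

bids = [  # (items requested, price offered) -- invented toy data
    ({"a", "b"}, 10),
    ({"b", "c"}, 8),
    ({"c"}, 5),
    ({"d"}, 4),
]

def fitness(chromosome):
    taken, revenue = set(), 0
    for gene, (items, price) in zip(chromosome, bids):
        if gene:
            if taken & items:      # an item would be sold twice
                return -1          # infeasible chromosome
            taken |= items
            revenue += price
    return revenue

best = max(product([0, 1], repeat=len(bids)), key=fitness)
# best accepts bids 0, 2 and 3 for a revenue of 19
```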
Building a neural network-based English-to-Arabic transfer module from an unrestricted domain
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5648157
Rasha Al Dam, A. Guessoum
This paper presents a transfer module for an English-to-Arabic Machine Translation System (MTS) built from an English-Arabic bilingual corpus. We propose an approach that builds the transfer module as a new transfer-based machine translation system using Artificial Neural Networks (ANNs). The idea is to let the ANN-based transfer module automatically learn correspondences between source- and target-language structures from a large set of English sentences and their Arabic translations. The paper presents the corpus-building methodology, then introduces the approach followed to develop the transfer module, and finally presents experimental results, which are very encouraging.
Citations: 9
A comparative study of Neural networks architectures on Arabic text categorization using feature extraction
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5648051
F. Harrag, A. Al-Salman, Mohammed Benmohammed
In this paper, we present a model based on Neural Networks (NN) for classifying Arabic texts. We propose using Singular Value Decomposition (SVD) as a preprocessor for the NN, with the aim of further reducing the data in both size and dimensionality. Indeed, SVD makes the data more amenable to classification and speeds up the convergence of training. Specifically, Multilayer Perceptron (MLP) and Radial Basis Function (RBF) classifiers are implemented and evaluated. Experiments are conducted on an in-house corpus of Arabic texts, with precision, recall and F-measure used to quantify categorization effectiveness. The results show that the proposed SVD-supported MLP/RBF ANN classifier achieves high effectiveness. They also show that, for Arabic text categorization, the MLP classifier outperforms the RBF classifier, and the SVD-supported NN classifier is better than the basic NN.
Citations: 16
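The SVD preprocessing step can be sketched with a tiny term-by-document matrix: truncating to the k largest singular values turns each document into a dense k-dimensional vector for the downstream classifier. The matrix and k=2 below are invented for illustration, not the paper's corpus or settings.

```python
# SVD as a dimensionality-reduction preprocessor: A (terms x documents)
# is factored as U S V^T, and each document is represented by its k-dim
# projection, which the MLP/RBF classifier would then train on.

import numpy as np

A = np.array([[2, 0, 1, 0],      # rows = terms, columns = documents
              [1, 0, 2, 0],
              [0, 3, 0, 1],
              [0, 1, 0, 2]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T    # one k-dim row per document

# Rank-k reconstruction: the part of A the reduced features preserve.
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]
```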
Latent semantic analysis-based image auto annotation
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5648152
Mahdia Bakalem, N. Benblidia, S. Oukid
Image retrieval is a particular case of information retrieval. It adds more complex mechanisms for judging relevance: visual content analysis and/or additional textual content. Image auto-annotation is a technique that associates text with images, making it possible to retrieve image documents as textual documents, as in classic information retrieval. Auto-annotation is thus an effective technology for improving image retrieval. In this work, we propose the first version of the AnnotB-LSA algorithm for image auto-annotation. Integrating the LSA model makes it possible to extract latent semantic relations among the textual descriptors and to minimize ambiguity (polysemy, synonymy) between image annotations.
Citations: 4
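A toy illustration of the LSA idea behind such annotation: annotation terms that co-occur across images land close together in the latent space, so related terms can be suggested for an image even when they never label the same image directly. The vocabulary and co-occurrence matrix below are invented for this sketch; this is not the AnnotB-LSA algorithm itself.

```python
# LSA over an annotation matrix: SVD projects each term into a low-rank
# latent space, where cosine similarity captures semantic relatedness
# ("sea" ~ "ocean") and separates unrelated terms ("sea" vs "car").

import numpy as np

terms = ["sea", "ocean", "beach", "car"]
X = np.array([[1, 1, 0, 0],      # rows = terms, columns = annotated images
              [0, 1, 1, 0],
              [1, 0, 1, 0],
              [0, 0, 0, 3]], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
latent = U[:, :2] * s[:2]        # 2-D latent vector per term

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

i_sea, i_ocean, i_car = terms.index("sea"), terms.index("ocean"), terms.index("car")
sim_sea_ocean = cos(latent[i_sea], latent[i_ocean])   # high
sim_sea_car = cos(latent[i_sea], latent[i_car])       # near zero
```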
Automatic construction of an on-line learning domain
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5648199
Chaoui Mohammed, L. M. Tayeb
The field of education has always been closely connected with information and communication technologies (ICT). Digital and network technologies are growing in importance, with the Web playing a central role. In online learning applications, the Web serves as the medium for producing, managing and distributing content. This space evolves rapidly and is governed by factors concerning its function and modes of signification, so it poses difficulties for teachers, especially in extracting relevant information. It is in this context that our research work is situated. Our goal is to offer a Web-based architecture for an e-learning domain. We first use the Web as a documentary medium through the Google search engine, and go beyond that to propose a model for creating an e-learning domain. We then study the evolution of the semantic web, and more specifically ontologies, creating and integrating an ontology in the same model. Finally, we apply a filtering method to extract the relevant parts used to build the online education domain.
Citations: 2