
2010 International Conference on Machine and Web Intelligence: Latest Publications

An encryption algorithm inspired from DNA
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5648076
Souhila Sadeg, Mohamed Gougache, N. Mansouri, H. Drias
DNA cryptography is a promising new direction in cryptography research that emerged with progress in the field of DNA computing. DNA can be used not only to store and transmit information but also to perform computations. The massive parallelism and extraordinary information density inherent in this molecule are exploited for cryptographic purposes, and several DNA-based algorithms have been proposed for encryption, authentication, and other tasks. The main current difficulties of DNA cryptography are the absence of a theoretical basis, high-tech laboratory requirements, and computational limitations. In this paper, a symmetric-key block cipher algorithm is proposed. It includes a step that simulates ideas from the processes of transcription (transfer from DNA to mRNA) and translation (from mRNA into amino acids). We believe this algorithm is computationally efficient and very secure, since it was designed following recommendations of experts in cryptography and focuses on applying Shannon's fundamental principles of confusion and diffusion. Tests were conducted and the results are very satisfactory.
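The transcription and translation steps the abstract alludes to can be sketched in a few lines. This is purely illustrative and not the paper's cipher: the binary-to-base table, the complement rule, and the absence of any key handling are our own simplifying assumptions.

```python
# Illustrative sketch of DNA-style encoding (assumed tables, no key schedule).
BIN_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BIN = {v: k for k, v in BIN_TO_BASE.items()}
DNA_TO_MRNA = {"A": "U", "T": "A", "C": "G", "G": "C"}  # complement, T -> U
MRNA_TO_DNA = {v: k for k, v in DNA_TO_MRNA.items()}

def to_dna(data: bytes) -> str:
    """Encode bytes as a strand of DNA bases, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BIN_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def transcribe(dna: str) -> str:
    """Simulate transcription: DNA strand to its complementary mRNA strand."""
    return "".join(DNA_TO_MRNA[base] for base in dna)

def from_mrna(mrna: str) -> bytes:
    """Invert transcription and decode the strand back to bytes."""
    bits = "".join(BASE_TO_BIN[MRNA_TO_DNA[base]] for base in mrna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

A real cipher would additionally drive a codon-level substitution (the "translation" step) with the secret key; only the reversible encoding skeleton is shown here.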
Citations: 59
Building a neural network-based English-to-Arabic transfer module from an unrestricted domain
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5648157
Rasha Al Dam, A. Guessoum
This paper presents a transfer module for an English-to-Arabic Machine Translation System (MTS) using an English-to-Arabic bilingual corpus. We propose to build the transfer module as a new transfer-based machine translation system using Artificial Neural Networks (ANNs). The idea is to let the ANN-based transfer module automatically learn correspondences between source and target language structures from a large set of English sentences and their Arabic translations. The paper presents the methodology for corpus building, then introduces the approach followed to develop the transfer module, and finally presents experimental results, which are very encouraging.
Citations: 9
Integrating legacy systems in a SOA using an agent based approach for information system agility
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5648011
H. Faycal, D. Habiba, Mellah Hakima
This paper presents an approach based on multi-agent systems (MAS) for encapsulating the features of traditional applications, also called legacy systems. We focus in particular on legacy systems based on Common Object Request Broker Architecture (CORBA) technology. The main objective of the encapsulation is to simplify the integration of this kind of application into a service-oriented architecture (SOA). We design an interface using the Java Agent DEvelopment framework (JADE), which enables automatic generation of code for CORBA clients and ontology classes. The proposed system creates a representative agent for each feature to be wrapped, and allows functions to be composed according to predefined templates. The system uses the Web Service Integration Gateway (WSIG) to publish the capacities of representative agents as web services to be used in a SOA.
Citations: 12
A metacomputing approach for the winner determination problem in combinatorial auctions
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5647909
Kahina Achour, Louiza Slaouti, D. Boughaci
Grid computing is an innovative approach that permits the use of computing resources that are far apart and connected by wide area networks. This technology has become extremely popular for optimizing computing resources and managing data and computing workloads. The aim of this paper is to propose a metacomputing approach for the winner determination problem (WDP) in combinatorial auctions. The proposed approach is a hybrid genetic algorithm adapted to the WDP and implemented on a grid computing platform.
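The abstract leaves the hybrid genetic algorithm unspecified, so the following is only a plausible sketch on a toy instance: the bid set, the repair heuristic, and every GA parameter are illustrative assumptions, not the authors' design.

```python
import random

# Toy WDP instance: each bid = (set of items, price). A feasible allocation
# accepts pairwise item-disjoint bids; the goal is maximal total price.
BIDS = [({1, 2}, 8), ({2, 3}, 6), ({3, 4}, 8), ({1}, 3), ({4}, 3)]

def repair(chrom):
    """Greedily drop conflicting bids so the chromosome is feasible."""
    taken, out = set(), list(chrom)
    for i, bit in enumerate(chrom):
        items, _ = BIDS[i]
        if bit and taken.isdisjoint(items):
            taken |= items
        else:
            out[i] = 0
    return tuple(out)

def revenue(chrom):
    return sum(BIDS[i][1] for i, bit in enumerate(chrom) if bit)

def ga_wdp(pop_size=30, gens=60, seed=0):
    """Tournament-selection GA with one-point crossover and repair."""
    rng = random.Random(seed)
    pop = [repair(tuple(rng.randint(0, 1) for _ in BIDS)) for _ in range(pop_size)]
    best = max(pop, key=revenue)
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a = max(rng.sample(pop, 3), key=revenue)   # tournament selection
            b = max(rng.sample(pop, 3), key=revenue)
            cut = rng.randrange(1, len(BIDS))          # one-point crossover
            child = list(a[:cut] + b[cut:])
            if rng.random() < 0.2:
                child[rng.randrange(len(BIDS))] ^= 1   # point mutation
            nxt.append(repair(tuple(child)))
        pop = nxt
        best = max(pop + [best], key=revenue)
    return best
```

The repair step is what makes the GA "hybrid" in the loose sense used here: every chromosome is forced back into the feasible region before evaluation.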
Citations: 4
Efficient extraction of news articles based on RSS crawling
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5647851
George Adam, C. Bouras, V. Poulopoulos
The expansion of the World Wide Web has led to a state in which a vast number of Internet users face, and must overcome, the problem of discovering desired information. Inevitably, hundreds of web pages and weblogs are generated or changed daily. The main problem arising from this continuous generation and alteration of web pages is the discovery of useful information, a task that is difficult even for experienced Internet users. Many mechanisms have been built to address information discovery on the Internet, most of them based on crawlers that browse the WWW, download pages, and collect information that might interest users. In this manuscript we describe a mechanism that fetches web pages containing news articles from major news portals and blogs. This mechanism is built to support tools that acquire news articles from all over the world, process them, and present them back to end users in a personalized manner.
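A crawler of this kind typically starts from the portals' RSS feeds. As a hedged sketch (the paper's actual pipeline is not given in the abstract), the feed-parsing step can be done with the standard library alone; the sample feed below is invented for illustration.

```python
import xml.etree.ElementTree as ET

# A tiny invented RSS 2.0 feed standing in for a news portal's real feed.
FEED = """<rss version="2.0"><channel><title>News</title>
<item><title>Story A</title><link>http://example.com/a</link>
<pubDate>Mon, 29 Nov 2010 00:00:00 GMT</pubDate></item>
</channel></rss>"""

def parse_rss_items(rss_xml: str):
    """Extract title/link/pubDate of every <item> in an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [{
        "title": item.findtext("title", default=""),
        "link": item.findtext("link", default=""),
        "pubDate": item.findtext("pubDate", default=""),
    } for item in root.iter("item")]
```

In a full crawler the `link` values would then be fetched and the article body extracted from the HTML; only the feed-parsing stage is shown.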
Citations: 13
Towards ontological model accuracy's scalability: Application to the Pervasive Computer Supported Collaborative Work
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5647999
K. Hamadache, L. Lancieri
In this paper we define an ontological model to accurately represent context in Pervasive Computer Supported Collaborative Work (PCSCW). A major issue in this domain is the mass of information required to correctly depict a situation. As users and devices must be represented along multiple aspects (physical, computational, social, and so on), the amount of information can quickly become unmanageable. Besides, as a PCSCW context model has to be usable on limited-resource devices such as cell phones, GPS units, and ADSL modems, we needed a more efficient way to represent information. The model we propose therefore offers the possibility of representing a situation with more or less precision, that is to say, with more or less abstraction. The final goal of this work is to provide a model able to reason with either a precise or a fuzzy description of a situation.
Citations: 3
Disparity map estimation with neural network
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5648182
Nadia Baha Touzene, S. Larabi
This work defines a new approach for computing a dense disparity map from a pair of stereo images using neural networks. Our approach is divided into two main tasks. The first computes the initial disparity map using a neural method (backpropagation, BP). The second presents a simple method to refine the initial disparity map using neural refinement, so that an accurate result can be acquired. In the literature, the matching score is based only on pixel intensities. In this work we introduce two additional features, the gradient magnitude and the orientation of the pixel gradient vector, which give a truer degree of similarity between pixels. Experiments on real data sets were conducted to evaluate the proposed method.
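The per-pixel similarity combining intensity with the two gradient features can be written directly with NumPy; the weights and the circular column shift below are our own assumptions, not the paper's exact cost.

```python
import numpy as np

def matching_cost(left, right, d, w=(0.5, 0.3, 0.2)):
    """Cost of matching each left pixel with the right pixel d columns away,
    mixing intensity, gradient-magnitude and gradient-orientation differences."""
    gy_l, gx_l = np.gradient(left)
    gy_r, gx_r = np.gradient(right)
    mag_l, mag_r = np.hypot(gx_l, gy_l), np.hypot(gx_r, gy_r)
    ori_l, ori_r = np.arctan2(gy_l, gx_l), np.arctan2(gy_r, gx_r)
    shift = lambda a: np.roll(a, d, axis=1)                # align right columns
    d_ori = np.angle(np.exp(1j * (ori_l - shift(ori_r))))  # wrap to [-pi, pi]
    wi, wm, wo = w
    return (wi * np.abs(left - shift(right))
            + wm * np.abs(mag_l - shift(mag_r))
            + wo * np.abs(d_ori))
```

A winner-take-all disparity map would then pick, per pixel, the `d` with the lowest cost; in the paper that selection and its refinement are handled by the neural stages instead.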
Citations: 3
A morphological analysis of Arabic language based on multicriteria decision making: TAGHIT system
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5647958
Cheragui Mohamed Amine, Hoceini Youssef, Abbas Moncef
In this paper, we present our work on Arabic morphology, especially mechanisms for resolving morphological ambiguity in Arabic text. This research has given birth to the TAGHIT system, a morphosyntactic tagger for Arabic. The originality of our work lies in implementing, inside the system, a new approach to disambiguation that differs from existing ones and is based on principles and techniques drawn from multicriteria decision making.
Citations: 2
A priori replica placement strategy in data grid
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5647925
Zakia Challal, T. Bouabana-Tebibel
The use of grid computing is becoming increasingly important in areas requiring large quantities of data and computation. To provide better access times and fault tolerance in such systems, replication is one of the main techniques. The effectiveness of a replication model depends on several factors, including the replica placement strategy. In this paper, we propose an a priori replica placement strategy that optimizes the distances between the data hosted on the grid.
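Reading "optimizing distances" as a k-median-style objective (our assumption; the abstract does not say), a greedy placement over a node-distance matrix looks like this:

```python
def greedy_placement(dist, k):
    """Pick k replica sites minimizing the summed distance from every node
    to its nearest replica (a greedy k-median heuristic).
    dist: square matrix where dist[i][j] is the node-to-node distance."""
    n = len(dist)
    chosen = []
    while len(chosen) < k:
        def cost_with(c):
            # Total distance if candidate c is added to the current replicas.
            return sum(min(dist[i][j] for j in chosen + [c]) for i in range(n))
        best = min((c for c in range(n) if c not in chosen), key=cost_with)
        chosen.append(best)
    return sorted(chosen)
```

On a 5-node line topology this places a single replica at the median node, which is the optimal 1-replica choice.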
Citations: 10
Latent semantic analysis-based image auto annotation
Pub Date : 2010-11-29 DOI: 10.1109/ICMWI.2010.5648152
Mahdia Bakalem, N. Benblidia, S. Oukid
Image retrieval is a particular case of information retrieval. It adds more complex mechanisms for relevance: visual content analysis and/or additional textual content. Image auto-annotation is a technique that associates text with images and permits image documents to be retrieved as textual documents, as in information retrieval. Image auto-annotation is therefore an effective technology for improving image retrieval. In this work, we propose the first version of the AnnotB-LSA algorithm for image auto-annotation. Integrating the LSA model permits extraction of the latent semantic relations in the textual descriptors and minimizes the ambiguity (polysemy, synonymy) between image annotations.
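The LSA step such a system builds on is, generically, a truncated SVD of a term-by-document matrix (here, annotation-tag by image); the sketch below shows only that generic step, not the AnnotB-LSA algorithm itself.

```python
import numpy as np

def lsa_embed(term_doc, k):
    """Project a term-by-document count matrix into a k-dimensional latent
    space via truncated SVD, the core operation of LSA."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    terms = U[:, :k] * s[:k]        # term (tag) coordinates
    docs = Vt[:k, :].T * s[:k]      # document (image) coordinates
    return terms, docs

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

When synonymous tags co-occur across the same images, their latent vectors become nearly collinear, which is how LSA reduces synonymy ambiguity.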
Citations: 4