
Latest publications from the 2013 3rd IEEE International Advance Computing Conference (IACC)

MePPM- Memory efficient prediction by partial match model for web prefetching
Pub Date : 2013-05-13 DOI: 10.1109/IADCC.2013.6514318
C. D. Gracia, S. Sudha
The proliferation of the World Wide Web and the immense growth in Internet users and in services requiring high bandwidth have substantially increased user response times. Thus, users often experience long latency while retrieving web objects. The popularity of web objects and web sites shows considerable spatial locality, which makes it possible to predict future accesses from previously accessed ones. This has motivated researchers to devise new web prefetching techniques to reduce user-perceived latency. Most research work is based on the standard Prediction by Partial Match model and its derivatives, such as the Longest Repeating Sequence and the Popularity-based model, which are built into Markov predictor trees using common surfing patterns. These models require a lot of memory. Hence, in this paper, memory-efficient Prediction by Partial Match models based on the Markov model are proposed to minimize memory usage compared with the standard prediction models and their derivatives.
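The abstract does not spell out how the MePPM structures are built, but the underlying idea of order-k Prediction by Partial Match over page-access sequences can be shown with a minimal sketch; the class name `PPMPredictor` and its methods are illustrative, not from the paper.

```python
from collections import defaultdict, Counter

class PPMPredictor:
    """Minimal order-k Prediction-by-Partial-Match model over page-access sequences.

    For every context of length 1..k it counts which page followed it, and it
    predicts by falling back from the longest matching context to shorter ones.
    """

    def __init__(self, order=2):
        self.order = order
        # context (tuple of pages) -> Counter of next pages
        self.model = defaultdict(Counter)

    def train(self, session):
        for i in range(len(session)):
            for k in range(1, self.order + 1):
                if i - k < 0:
                    break
                context = tuple(session[i - k:i])
                self.model[context][session[i]] += 1

    def predict(self, recent):
        # Try the longest context first, then back off to shorter ones.
        for k in range(min(self.order, len(recent)), 0, -1):
            context = tuple(recent[-k:])
            if context in self.model:
                return self.model[context].most_common(1)[0][0]
        return None

if __name__ == "__main__":
    ppm = PPMPredictor(order=2)
    for session in (["home", "news", "sports", "scores"],
                    ["home", "sports", "scores"],
                    ["home", "news", "weather"]):
        ppm.train(session)
    print(ppm.predict(["news", "sports"]))   # -> "scores"
```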
Citations: 6
Efficient focused crawling based on best first search
Pub Date : 2013-05-13 DOI: 10.1109/IADCC.2013.6514347
S. Rawat, D. R. Patil
The World Wide Web continues to grow at an exponential rate, so fetching information about a specific topic is gaining importance, which poses exceptional scaling challenges for general-purpose crawlers and search engines. This paper describes a web crawling approach based on best-first search. The goal of a focused crawler is to selectively seek out pages that are relevant to given keywords. Rather than collecting and indexing all available web documents to be able to answer all possible queries, a focused crawler analyzes its crawl boundary to find the links that are likely to be most relevant for the crawl and avoids irrelevant links. This leads to significant savings in hardware and network resources and also helps keep the crawl more up-to-date. To accomplish such goal-directed crawling, we select the top k most relevant documents for a given query and then expand the most promising link, chosen according to its link score, to circumvent irrelevant regions of the web.
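The paper's exact link-scoring function is not given in the abstract, so the sketch below uses a simple keyword-overlap score and an in-memory toy web as stand-ins; `best_first_crawl`, `keyword_score` and the toy `fetch` are hypothetical names used only to illustrate best-first frontier expansion with a priority queue.

```python
import heapq

def keyword_score(text, keywords):
    """Crude relevance score: fraction of query keywords present in the text."""
    words = set(text.lower().split())
    return sum(1 for k in keywords if k in words) / len(keywords)

def best_first_crawl(seeds, fetch, keywords, max_pages=10):
    """Best-first focused crawl: always expand the highest-scoring frontier link.

    `fetch(url)` must return (page_text, outlinks); here it is backed by a toy
    in-memory graph so the sketch runs without network access.
    """
    frontier = [(-1.0, url) for url in seeds]   # max-heap via negated scores
    heapq.heapify(frontier)
    visited, results = set(), []
    while frontier and len(results) < max_pages:
        _neg_score, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        text, outlinks = fetch(url)
        results.append((url, keyword_score(text, keywords)))
        for link in outlinks:
            if link not in visited:
                # Score the link by the relevance of the page it was found on.
                heapq.heappush(frontier, (-keyword_score(text, keywords), link))
    return results

if __name__ == "__main__":
    toy_web = {
        "seed": ("focused crawling tutorial", ["a", "b"]),
        "a": ("crawling and search engines", ["c"]),
        "b": ("cooking recipes", []),
        "c": ("best first search for crawlers", []),
    }
    fetch = lambda url: toy_web[url]
    print(best_first_crawl(["seed"], fetch, ["crawling", "search"]))
```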
Citations: 20
Real time RSSI error reduction in distance estimation using RLS algorithm
Pub Date : 2013-05-13 DOI: 10.1109/IADCC.2013.6514305
R. Mehra, Ashutosh Kumar Singh
Recently, received signal strength (RSS)-based distance estimation has been proposed as a low-complexity, low-cost solution for mobile communication nodes with minimum RSSI error. After investigating existing localization algorithms, it is observed that the distribution of the RSSI value at each sample point fluctuates even at the same position because of the shadow-fading effect. Therefore, this paper presents a novel method for RSSI error reduction in distance estimation that applies the recursive least squares (RLS) algorithm to the existing deterministic algorithms. The proposed method collects RSSI values from the mobile communication node to build a probability model. Once the probability models are estimated in real time, using adaptive filtering, for different standard deviations related to the path loss exponent, it is possible to accurately determine the distance between the mobile communication node and a fixed communication node. Simulation results show that the accuracy of the RSSI value for real-time distance estimation of the mobile communication node is improved in changing environments.
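As a rough illustration of the idea, the sketch below smooths noisy RSSI samples with a scalar recursive least squares tracker and then converts the smoothed value to distance with a log-distance path-loss model; the forgetting factor, path-loss exponent and reference values are illustrative assumptions, not the paper's parameters.

```python
import math
import random

class ScalarRLS:
    """Recursive least squares with a constant regressor, used here to track
    the underlying RSSI level through shadow-fading fluctuations."""

    def __init__(self, lam=0.98, p0=1000.0, theta0=0.0):
        self.lam = lam          # forgetting factor
        self.P = p0             # inverse correlation (scalar)
        self.theta = theta0     # current RSSI estimate (dBm)

    def update(self, y):
        k = self.P / (self.lam + self.P)        # gain
        self.theta += k * (y - self.theta)      # correct the estimate
        self.P = (1.0 - k) * self.P / self.lam  # propagate the covariance
        return self.theta

def rssi_to_distance(rssi, rssi_d0=-40.0, d0=1.0, n=2.7):
    """Log-distance path-loss model: RSSI = RSSI(d0) - 10 n log10(d / d0)."""
    return d0 * 10.0 ** ((rssi_d0 - rssi) / (10.0 * n))

if __name__ == "__main__":
    random.seed(1)
    true_d = 8.0
    true_rssi = -40.0 - 10 * 2.7 * math.log10(true_d)
    rls = ScalarRLS(theta0=true_rssi + 5.0)      # deliberately biased start
    for _ in range(200):
        noisy = true_rssi + random.gauss(0, 4)   # shadow-fading noise
        est = rls.update(noisy)
    # raw-sample distance vs. RLS-smoothed distance vs. ground truth
    print(round(rssi_to_distance(noisy), 2), round(rssi_to_distance(est), 2), true_d)
```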
Citations: 28
RatioRank: Enhancing the impact of inlinks and outlinks
Pub Date : 2013-05-13 DOI: 10.1109/IADCC.2013.6514328
R. Singh, D. Sharma
The web is the largest collection of information, and because of its dynamic nature plenty of pages or documents are frequently added and deleted. The information present on the web is in great demand: the world is full of questions, and the web serves as the major source for answering specific queries made by users. For a given query, a search engine retrieves a number of pages, and the quality of those retrieved pages is often questionable. The search engine therefore applies ranking algorithms to order the retrieved pages so that the most relevant documents are displayed at the top of the list. In this paper a new page ranking algorithm known as RatioRank is discussed, in which inlink weights and outlink weights are used together with the number of visits, and it is compared with several existing algorithms using certain parameters.
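The abstract does not state the RatioRank formula, so the following sketch only illustrates one plausible way to fold an inlink/outlink ratio and normalized visit counts into a PageRank-style iteration; the weighting used here is an assumption for illustration, not the algorithm from the paper.

```python
def ratio_rank_like(graph, visits, d=0.85, iters=50):
    """PageRank-style iteration in which each page's contribution is weighted
    by its inlink/outlink ratio and a normalized visit count.

    `graph` maps page -> list of outlinked pages; `visits` maps page -> visit count.
    """
    pages = list(graph)
    inlinks = {p: 0 for p in pages}
    for p in pages:
        for q in graph[p]:
            inlinks[q] += 1
    total_visits = sum(visits.values()) or 1
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {}
        for p in pages:
            s = 0.0
            for q in pages:
                if graph[q] and p in graph[q]:
                    # inlink/outlink ratio of the linking page, smoothed by +1
                    ratio = (inlinks[q] + 1) / (len(graph[q]) + 1)
                    weight = ratio * (visits[q] / total_visits)
                    s += rank[q] * weight / len(graph[q])
            new[p] = (1 - d) / len(pages) + d * s
        rank = new
    return rank

if __name__ == "__main__":
    graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    visits = {"a": 30, "b": 10, "c": 60}
    print(ratio_rank_like(graph, visits))
```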
Citations: 8
Averting man in the browser attack using user-specific personal images
Pub Date : 2013-05-13 DOI: 10.1109/IADCC.2013.6514413
P. Goyal, N. Bansal, N. Gupta
In recent years there has been a tremendous rise in online banking transactions. Some of the major factors contributing to this growth are the easy availability of high-bandwidth Internet connections at low cost and the ease of online transactions compared with traditional banking methods. This field has also attracted cybercriminals, who take advantage of loopholes in the online transaction process. One of the most advanced threats today is the Man in the Browser (MitB) attack. A MitB attack deploys a Trojan in the user's browser and then steals the user's credentials to initiate a transaction that transfers money into a mule account. All of this happens without any notice to the bank or the user. The MitB attack defeats the traditional two-factor authentication currently used by banks to verify the identity of the user performing the transaction. This paper presents a new framework for enhancing authentication during an online transaction to tackle the MitB attack.
Citations: 7
Local entropy based brain MR image segmentation
Pub Date : 2013-05-13 DOI: 10.1109/IADCC.2013.6514403
A. Chaudhari, J. Kulkarni
Magnetic Resonance Imaging (MRI) offers a great deal of information for medical examination. Fast, accurate and reproducible segmentation of MRI is desirable in many applications. Brain image segmentation is important from a clinical point of view for the detection of tumors. Brain images mostly contain noise and inhomogeneity, and sometimes deviation. Therefore, accurate segmentation of brain images is a very difficult task. In this paper we present an automatic method of brain segmentation for the detection of tumors. MR images from T1, T2 and FLAIR sequences, with axial, coronal and sagittal slices, are used for the study. The segmentation of the MR images is done using textural features based on the gray-level co-occurrence matrix. The textural feature used is the entropy of the image.
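As a rough sketch of this kind of texture-based segmentation, the code below computes, for each pixel, the entropy of a gray-level co-occurrence matrix built over a small window and labels pixels whose entropy exceeds a threshold; the window size, number of gray levels and threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np

def glcm_entropy(window, levels=8):
    """Entropy of the gray-level co-occurrence matrix (horizontal neighbours)."""
    q = (window.astype(float) / 256 * levels).astype(int)    # quantize gray levels
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):     # offset (0, 1)
        glcm[i, j] += 1
    p = glcm / glcm.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def entropy_segment(image, win=9, thresh=2.0):
    """Label pixels whose local GLCM entropy exceeds a threshold."""
    half = win // 2
    mask = np.zeros(image.shape, dtype=bool)
    for r in range(half, image.shape[0] - half):
        for c in range(half, image.shape[1] - half):
            w = image[r - half:r + half + 1, c - half:c + half + 1]
            mask[r, c] = glcm_entropy(w) > thresh
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.full((40, 40), 60, dtype=np.uint8)               # homogeneous background
    img[10:25, 10:25] = rng.integers(0, 256, (15, 15))        # textured "lesion"
    print(entropy_segment(img).sum(), "high-entropy pixels")
```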
Citations: 15
Customized architecture for implementing configurable FFT on FPGA
Pub Date : 2013-05-13 DOI: 10.1109/IADCC.2013.6514412
V. Chandrakanth, S. Tripathi
The Fourier transform algorithm is used across diverse fields of engineering, including specialized fields such as radar, communications and image processing systems. There have therefore been continual efforts to improve the efficiency of FFT implementations in real-time systems and other hardware. To reduce design time and time to market, FPGA vendors have developed IP cores that can be readily used in applications. Although efficient, these IP core designs are highly abstract and do not allow the designer to modify them according to particular requirements, which leads to inefficient design realization. Vendor-provided IP cores do not give access to the FFT kernel matrix, restricting their configurability and the efficiency with which they can be used. In this paper we design a customized architecture that performs the FFT with access to the twiddle factors for improved configurability. The architecture is further modified to perform a variable-point FFT targeted at multirate systems. The architecture is generic and can be implemented on any vendor platform.
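The paper describes an FPGA architecture, which a short software snippet cannot reproduce; the Python sketch below only illustrates the arithmetic idea of a radix-2 FFT whose twiddle-factor table is precomputed and exposed to the caller, which is the kind of configurability the abstract argues vendor IP cores lack.

```python
import cmath

def twiddle_table(n):
    """Precomputed twiddle factors W_n^k = exp(-2*pi*i*k/n), k = 0 .. n/2-1.

    Exposing this table is the software analogue of giving the designer access
    to the FFT kernel values instead of hiding them inside a fixed core.
    """
    return [cmath.exp(-2j * cmath.pi * k / n) for k in range(n // 2)]

def fft_radix2(x, tw=None):
    """Iterative radix-2 decimation-in-time FFT for power-of-two lengths."""
    n = len(x)
    if tw is None:
        tw = twiddle_table(n)
    # bit-reversal permutation of the input
    bits = n.bit_length() - 1
    a = [x[int(format(i, f"0{bits}b")[::-1], 2)] for i in range(n)]
    size = 2
    while size <= n:
        step = n // size
        for start in range(0, n, size):
            for k in range(size // 2):
                w = tw[k * step]
                u, v = a[start + k], a[start + k + size // 2] * w
                a[start + k], a[start + k + size // 2] = u + v, u - v
        size *= 2
    return a

if __name__ == "__main__":
    x = [1, 2, 3, 4, 0, 0, 0, 0]
    print([round(abs(v), 3) for v in fft_radix2(x)])
```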
Citations: 3
Task allocation in a massively parallel system using Finite Automata
Pub Date : 2013-05-13 DOI: 10.1109/IADCC.2013.6514298
Zubair Khan, Ravindra Singh, Sumit Sanwal, Arun Gangwar, Shabbir Alam
In this paper we propose a new approach for task allocation in a massively parallel system using finite automata. On the basis of a task flow model of finite automata, we find the turnaround time for a parallel system by treating the finite automaton as a directed acyclic graph. In the second section of the paper we discuss finite automata and directed acyclic graphs, after which we convert the finite automaton into a DAG for the massively parallel system. All simulations are performed with the Intel C++ parallel compiler; we compare the results with several interesting scheduling algorithms and obtain better turnaround time.
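The abstract does not detail the automaton-to-DAG conversion, so the sketch below makes a simplifying assumption (transitions toward higher-numbered states form the precedence edges; back edges are dropped) and then computes turnaround time as the critical-path length under unlimited processors; both choices are illustrative, not the paper's method.

```python
from collections import defaultdict

def automaton_to_dag(transitions):
    """Keep only 'forward' transitions (source state id < target state id).

    Dropping back edges is just one simple way to obtain an acyclic task graph
    from an automaton's transition relation.
    """
    dag = defaultdict(list)
    for src, dst in transitions:
        if src < dst:
            dag[src].append(dst)
    return dag

def turnaround(dag, cost):
    """Earliest-finish (critical-path) time assuming unlimited parallel processors."""
    finish = {}

    def ef(task):
        if task not in finish:
            preds = [p for p, succs in dag.items() if task in succs]
            finish[task] = cost[task] + max((ef(p) for p in preds), default=0)
        return finish[task]

    return max(ef(t) for t in cost)

if __name__ == "__main__":
    # states 0..4; one back edge (3 -> 1) is dropped during the conversion
    transitions = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (3, 1)]
    cost = {0: 2, 1: 3, 2: 1, 3: 4, 4: 2}
    print(turnaround(automaton_to_dag(transitions), cost))   # 2 + 3 + 4 + 2 = 11
```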
本文提出了一种利用有限自动机进行大规模并行系统任务分配的新方法。基于有限自动机的任务流模型。在本文的第二部分,我们讨论了有限自动机和有向无环图作为并行系统的有向无环图,然后我们将有限自动机转化为大规模并行系统的DAG。所有的模拟都是在Intel c++并行编译器中进行的,并将这些结果与几种有趣的调度算法进行比较,我们得到了更好的周转时间。
{"title":"Task allocation in a massively parallel system using Finite Automata","authors":"Zubair Khan Ravindra Singh, Sumit Sanwal, Arun Gangwar, Shabbir Alam","doi":"10.1109/IADCC.2013.6514298","DOIUrl":"https://doi.org/10.1109/IADCC.2013.6514298","url":null,"abstract":"In this paper we are proposing a new approach for tasks allocation in a massively parallel system using Finite Automata. On the basis of task flow model of finite automata., we find the turnaround time for a parallel system using finite automata as a directed acyclic graph in the second section of the paper we discuss regarding the finite automata and directed acyclic graph after that we change finite automata into DAG for massively parallel system. All the simulations are performing in Intel C++ parallel compiler and compare these results with several interesting scheduling algorithms and we get better turnaround time.","PeriodicalId":325901,"journal":{"name":"2013 3rd IEEE International Advance Computing Conference (IACC)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134334379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Discovering interesting rules from biological data using parallel genetic algorithm
Pub Date : 2013-05-13 DOI: 10.1109/IADCC.2013.6514300
S. Dash, S. Dehuri, S. Rayaguru
In this paper, a parallel genetic algorithm-based association rule mining method is proposed to discover interesting rules from a large biological database. The Apriori algorithm and its variants for association rule mining rely on two user-specified threshold parameters, minimum support and minimum confidence, which is clearly an issue to be resolved. In addition, other issues such as the large search space and local optimality attract many researchers to heuristic mechanisms. Given large biological databases, and with the aim of circumventing these problems, a genetic algorithm can be taken as a suitable tool, but its computational cost is the main bottleneck. Therefore, we choose parallel genetic algorithms to relieve the computational cost. The experimental results are promising and encourage further research, especially in the domain of biological science.
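As a minimal illustration of the approach, the sketch below evolves candidate rules whose fitness combines support and confidence and farms fitness evaluation out to a process pool; the toy transactions, the trivial variation operators and the fitness formula are assumptions for illustration, not the paper's parallel GA.

```python
import random
from multiprocessing import Pool

ITEMS = ["A", "B", "C", "D", "E"]

def rule_fitness(rule, transactions):
    """Fitness of a rule (antecedent -> consequent): support * confidence."""
    antecedent, consequent = rule
    n_ant = sum(1 for t in transactions if antecedent <= t)
    n_both = sum(1 for t in transactions if antecedent <= t and consequent in t)
    if n_ant == 0:
        return 0.0
    support = n_both / len(transactions)
    confidence = n_both / n_ant
    return support * confidence

def random_rule():
    ant = frozenset(random.sample(ITEMS, random.randint(1, 2)))
    cons = random.choice([i for i in ITEMS if i not in ant])
    return (ant, cons)

def evolve(transactions, pop_size=20, generations=15):
    """Tiny GA: the fitter half survives, the rest is replaced by fresh random
    rules (variation is kept trivial for brevity). Fitness evaluation is farmed
    out to a process pool, which is the 'parallel' part of the idea."""
    population = [random_rule() for _ in range(pop_size)]
    with Pool() as pool:
        for _ in range(generations):
            scores = pool.starmap(rule_fitness,
                                  [(r, transactions) for r in population])
            ranked = [r for _, r in sorted(zip(scores, population),
                                           key=lambda p: p[0], reverse=True)]
            population = ranked[:pop_size // 2] + \
                         [random_rule() for _ in range(pop_size - pop_size // 2)]
    return ranked[0], max(scores)

if __name__ == "__main__":
    random.seed(0)
    transactions = [frozenset(random.sample(ITEMS, 3)) for _ in range(100)]
    best, score = evolve(transactions)
    print(set(best[0]), "->", best[1], round(score, 3))
```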
Citations: 10
Cluster head selection scheme for data centric wireless sensor networks
Pub Date : 2013-05-13 DOI: 10.1109/IADCC.2013.6514245
V. Pal, G. Singh, R. Yadav
Energy efficiency is the main design issue for wireless sensor network protocols. Node clustering is an energy-efficient approach for sensor networks. In clustering algorithms, nodes are grouped into independent clusters, and each cluster has a cluster head. The number of data units gathered at the base station depends on the lifetime of the network. Cluster head selection is an important issue for the energy efficiency of clustering schemes, since the intra-cluster communication distance depends on the position of the cluster head within the cluster. In this paper, a new cluster head selection scheme is proposed that can be combined with any distributed clustering scheme. The network area is divided into two parts, a border area and an inner area, and cluster head selection is restricted to inner-area nodes only. The scheme is implemented and simulated with LEACH in NS-2. Simulations show that the proposed scheme significantly outperforms LEACH in network lifetime and data gathering rate.
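A minimal sketch of the selection step is given below: it follows the standard LEACH threshold formula but lets only nodes outside a border strip become cluster heads; the area size, border width and election probability are illustrative assumptions, since the abstract does not report them.

```python
import random

def elect_cluster_heads(nodes, area=100.0, border=10.0, p=0.1, rnd=0):
    """LEACH-style randomized cluster-head election, restricted to nodes that
    lie outside a border strip of the deployment area (the inner zone).

    `nodes` is a list of dicts with 'x', 'y' and 'was_ch' fields; the threshold
    T(n) = p / (1 - p * (rnd mod 1/p)) follows the standard LEACH formulation.
    """
    threshold = p / (1 - p * (rnd % int(1 / p)))
    heads = []
    for node in nodes:
        inner = (border <= node["x"] <= area - border and
                 border <= node["y"] <= area - border)
        if inner and not node["was_ch"] and random.random() < threshold:
            node["was_ch"] = True
            heads.append(node)
    return heads

if __name__ == "__main__":
    random.seed(42)
    nodes = [{"x": random.uniform(0, 100), "y": random.uniform(0, 100),
              "was_ch": False} for _ in range(100)]
    chs = elect_cluster_heads(nodes)
    print(len(chs), "cluster heads, all inside the inner area:",
          all(10 <= n["x"] <= 90 and 10 <= n["y"] <= 90 for n in chs))
```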
Citations: 12