
Latest publications: Proceedings of the International Conference on Advances in Information Communication Technology & Computing

Summarization Techniques of Cloud Computing
Ankita Gupta, Deepak Motwani
Text summarization is the process of condensing a source text into a shorter version while preserving its information content and original meaning. Summarizing a very large number of documents by hand is a difficult or impossible task for human beings. Text summarization methods are divided into two categories: extractive and abstractive summarization. The extractive technique selects significant sentences, paragraphs, etc. from the original documents and concatenates them into a shorter form; the significance of a sentence is decided by its statistical and linguistic features. An abstractive method, on the other hand, entails understanding the original text and re-telling it in fewer words: it uses linguistic approaches to inspect and interpret the text, and generates a new, shorter text that conveys the most meaningful facts from the original document. A detailed study of text summarization systems is presented in this paper.
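The word-frequency flavour of extractive scoring described above can be sketched in a few lines (a generic illustration, not the system surveyed in the paper; the sentence-splitting regex and the scoring rule are our own simplifications):

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Keep the n highest-scoring sentences, in their original order.

    A sentence's score is the mean document frequency of its words, so
    sentences full of common document terms are judged significant.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]), reverse=True)
    return " ".join(sentences[i] for i in sorted(ranked[:n_sentences]))
```

Real extractive systems add positional, cue-phrase, and title-overlap features on top of this frequency signal.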
Citations: 0
A Scrutable Algorithm for Enhancing the Efficiency of Recommender Systems using Fuzzy Decision Tree
S. Moses, L. D. D. Babu
Recommender systems play the major role of filtering needed information out of an enormous amount of overloaded information. From e-commerce to movie websites, recommender systems are used to market products to customers. A recommender system also gains user trust by suggesting products of interest based on the customer's profile and other related information. So, when the recommender system goes wrong or suggests an irrelevant product, the customer will stop trusting and using it. Such a scenario affects the customer as well as the e-commerce and other websites that depend on recommender systems to boost sales. There is a significant need to correct a recommender system when it goes wrong, since wrong recommendations weaken user trust and diminish the efficiency of the system. In this paper, we define a scrutable algorithm, based on a fuzzy decision tree, for enhancing the efficiency of recommender systems. The scrutable algorithm corrects the system and works to enhance its efficiency. By adopting the scrutable algorithm, users will be in a position to understand the transparency of the recommended items, which, in turn, will gain user trust.
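To make the fuzzy-decision-tree idea concrete, the toy node below replaces a crisp split on a feature with graded membership and blends both branch outcomes; the threshold values and leaf scores are invented for illustration and are not taken from the paper:

```python
def membership_low(x, a=3.0, b=5.0):
    """Degree to which x counts as 'low': 1 below a, 0 above b, linear between."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def fuzzy_node_score(rating_count):
    """A fuzzy decision node blends both branch scores by membership degree
    instead of committing to a single crisp branch."""
    mu_low = membership_low(rating_count)
    score_rarely_rated = 0.2   # hypothetical leaf value for the 'low' branch
    score_popular = 0.9        # hypothetical leaf value for the 'high' branch
    return mu_low * score_rarely_rated + (1.0 - mu_low) * score_popular
```

The soft blend is also what makes the decision scrutable: the membership degrees can be shown to the user as an explanation of why an item was recommended.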
Citations: 2
A Feature Based Approach for Medical Databases
Ritu Chauhan, Harleen Kaur, Sukrati Sharma
Medical data mining is an emerging field employed to discover hidden knowledge within large datasets for early medical diagnosis of disease. Large databases usually comprise numerous features, which may contain missing values, noise and outliers; such features can mislead future medical diagnosis. Moreover, to deal with irrelevant and redundant features in large databases, proper data pre-processing techniques need to be applied. In past studies, data mining techniques such as feature selection have been applied efficiently to deal with irrelevant, noisy and redundant features. This paper describes the application of data mining techniques using feature selection to conduct machine learning studies on records collected from pancreatic cancer patients. We have evaluated different feature selection techniques, such as the Correlation-based Feature Selection (CFS) filter method and wrapper subset evaluation, together with the Naive Bayes and J48 (an implementation of C4.5) classifiers, on medical databases in order to analyze which data mining algorithms can effectively classify medical data for future medical diagnosis. Further, experiments have been used to measure the effectiveness and efficiency of the feature selection algorithms. The experimental analysis has proven beneficial in determining machine learning methods for effective analysis of pancreatic cancer diagnosis.
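As a hint of how a correlation-based filter works: CFS proper scores whole feature subsets by merit, but the simpler sketch below conveys the filter idea by ranking individual features by their absolute Pearson correlation with the class label (function names and data are illustrative):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation; returns 0.0 if either sequence has zero variance."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy) if vx and vy else 0.0

def rank_features(rows, labels):
    """Rank feature indices by |correlation with the class label|, best first."""
    scores = [abs(pearson([r[j] for r in rows], labels))
              for j in range(len(rows[0]))]
    return sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)
```

A wrapper method would instead train the actual classifier (Naive Bayes or J48) on candidate subsets and keep whichever subset yields the best cross-validated accuracy, which is costlier but classifier-aware.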
Citations: 4
A Novel Technique of implementing Bidirectional Point Coordination Function for Voice Traffic in WLAN
Himanshu Yadav, D. Dembla
WLANs have substituted for wired networks and have created a revolution in the area of communication. The Point Coordination Function (PCF) of the IEEE 802.11 protocol offers support for real-time traffic in Wireless Local Area Networks (WLANs). However, since PCF is a centralized polling protocol, some bandwidth is wasted on null packets and polling overhead. To provide improved channel utilization and to decrease bandwidth consumption, an enhanced bidirectional transmission scheme known as the Bi-Directional Point Coordination Function (BD-PCF) is incorporated into PCF. Under this policy, wireless Access Points (APs) can estimate a suitable length for the contention-free period by assuming any received packet to be equal in size to the previously received packet. But as only one packet is transmitted per exchange, the quality-of-service constraints require an enhancement. A novel procedure is therefore proposed in which two packets are conveyed in the same interval with the AP, so that a station can set its wake-up timer and enter sleep mode for the remainder of the Contention Free Period (CFP). Extensive computer-based models of the newly proposed method are established to evaluate the improvement in terms of throughput and delay for voice traffic.
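A rough back-of-the-envelope sketch of sizing a contention-free period under the assumption described above, i.e. each station's next packet matches the last received one and every exchange carries one downlink plus one uplink frame (the data rate and slot duration below are hypothetical parameters, not values from the paper):

```python
import math

def estimate_cfp_slots(last_packet_bytes, n_stations,
                       rate_bps=1_000_000, slot_us=50):
    """Estimate the contention-free-period length in slots, assuming every
    station's next packet equals the last received packet in size and each
    exchange is bidirectional (one downlink + one uplink frame)."""
    tx_time_us = last_packet_bytes * 8 / rate_bps * 1e6  # airtime of one packet
    per_station_us = 2 * tx_time_us                      # downlink + uplink
    return math.ceil(n_stations * per_station_us / slot_us)
```

A real estimate would also add inter-frame spaces, acknowledgements, and PHY preamble time per frame.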
Citations: 0
Combining Synthetic Minority Oversampling Technique and Subset Feature Selection Technique For Class Imbalance Problem
Pawan Lachheta, S. Bawa
Building an effective classification model when high-dimensional data suffers from the class imbalance problem is a major challenge. The problem is severe when negative samples vastly outnumber positive samples. To surmount the class imbalance and high dimensionality issues in a dataset, we propose an SFS framework that comprises SMOTE filters, used for balancing the datasets, as well as a feature ranker for pre-processing the data. The framework is developed using the R language and various R packages. The performance of the SFS framework is then evaluated, and the proposed framework is found to outperform other state-of-the-art methods.
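The SMOTE step of such a framework generates synthetic minority samples by interpolating between a minority sample and one of its nearest minority neighbours. A minimal sketch of that core idea (the paper's framework is built in R; this is not its code, and the neighbour count and seed are arbitrary):

```python
import random

def smote(minority, n_new, k=2, seed=42):
    """Create n_new synthetic samples: pick a minority point, pick one of its
    k nearest minority neighbours, and interpolate a random fraction between."""
    rng = random.Random(seed)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    out = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # fraction of the way from base towards neighbour
        out.append(tuple(x + gap * (y - x) for x, y in zip(base, nb)))
    return out
```

Because each synthetic point lies on a segment between two real minority points, oversampling densifies the minority region instead of merely duplicating samples.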
Citations: 8
Performance Analysis of Cache Coherence Protocols for Multi-core Architectures: A System Attribute Perspective
Amit D. Joshi, Satyanarayana Vollala, B. S. Begum, N. Ramasubramanian
Shared-memory multi-core processors are becoming dominant in today's computer architectures. Caching of shared data may produce the problem of replication in multiple caches. Replication reduces contention for shared data items along with access latency and memory bandwidth, but caching shared data that is being read by multiple processors simultaneously introduces the problem of cache coherence. There are two different techniques to track the sharing status, viz. directory-based and snooping. This work emphasizes the study and analysis of the impact of various system parameters on the performance of these basic techniques. The performance analysis is based on the number of processors, the available bandwidth and the cache size. The prime aim of this work is to identify the appropriate cache coherence protocol for various configurations. Simulation results have shown that snooping-based systems are appropriate for high-bandwidth systems and are the ideal choice for CPU- and communication-intensive workloads, while directory-based cache coherence protocols are suitable for lower-bandwidth systems and are more appropriate for memory-intensive workloads.
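The snooping technique can be illustrated with a miniature MSI-style state machine in which every read miss and every write is broadcast to the other caches (a didactic sketch of one memory block, not the protocol simulated in the paper):

```python
class SnoopingCache:
    """One cache's view of a single block: M(odified), S(hared), I(nvalid)."""

    def __init__(self):
        self.state = "I"

    def read(self, others):
        if self.state == "I":          # read miss is broadcast on the bus
            for o in others:
                if o.state == "M":
                    o.state = "S"      # owner writes back and downgrades
            self.state = "S"

    def write(self, others):
        for o in others:               # snooped write invalidates other copies
            o.state = "I"
        self.state = "M"
```

The broadcast on every miss is why snooping wants a high-bandwidth bus, whereas a directory replaces the broadcast with point-to-point messages to the recorded sharers.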
Citations: 0
Identification and ranking of key persons in a Social Networking Website using Hadoop & Big Data Analytics
Prerna Agarwal, Rafeeq Ahmed, Tanvir Ahmad
Big Data is a term describing vast amounts of structured and unstructured data that are challenging to process with traditional algorithms because of their large size and the lack of high-speed processing techniques. Nowadays, vast amounts of digital data are gathered from many important areas, including social networking websites like Facebook and Twitter. It is important to mine this big data for analysis purposes. One important analysis in this domain is to find the key nodes in a social graph, which can be the major information spreaders. Node centrality measures can be used in many graph applications, such as searching and ranking of nodes. Traditional centrality algorithms, such as degree centrality, betweenness centrality and closeness centrality, were not designed for such large data and fail on huge graphs, so it is difficult to apply them directly. In this paper, we calculate centrality measures for big graphs with huge numbers of edges and nodes by parallelizing the traditional centrality algorithms, so that they can be used efficiently as the size of a graph grows. We use MapReduce and Hadoop to implement these algorithms for parallel and distributed data processing. We present the results and anomalies of these algorithms and also compare the processing time taken on normal systems and on Hadoop systems.
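The parallelization idea is easiest to see for degree centrality, where the map phase emits a count for each edge endpoint and the reduce phase sums the counts per node. A MapReduce-shaped sketch in plain Python (Hadoop itself runs Java or streaming jobs; this only mirrors the dataflow):

```python
from collections import defaultdict

def map_edges(edges):
    """Map phase: for every undirected edge, emit (endpoint, 1) for both ends."""
    for u, v in edges:
        yield u, 1
        yield v, 1

def reduce_degrees(pairs):
    """Reduce phase: sum the emitted 1s per node, giving its degree centrality."""
    degree = defaultdict(int)
    for node, count in pairs:
        degree[node] += count
    return dict(degree)
```

Betweenness and closeness are harder to parallelize because they need shortest paths, which is why they are the interesting cases for a Hadoop implementation.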
Citations: 3
A Novel Scheme for Data Security in Cloud Computing using Quantum Cryptography
Geeta Sharma, S. Kalra
Cloud computing manifests an exceptional capacity to provide easy-to-manage, cost-effective, flexible and powerful resources across the internet. Through maximum and shared utilization of resources, cloud computing enhances their capabilities. There is a dire need for data security in the wake of attackers' increasing capabilities and the high volume of sensitive data. Cryptography is employed to ensure the secrecy and authentication of data. Conventional information assurance methods face increasing technological advances, such as radical developments in mathematics, the potential to perform big computations and the prospect of wide-ranging quantum computation. Quantum cryptography is a promising solution towards absolute security in cryptosystems. This paper proposes integrating the Advanced Encryption Standard (AES) algorithm with quantum cryptography. The proposed scheme is robust and meets essential security requirements. The simulation results show that the quantum AES produces complex keys that are harder for adversaries to predict than the keys generated by AES itself.
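Schemes of this kind typically obtain their key material from quantum key distribution. The sketch below shows only the basis-sifting step of the standard BB84 protocol, in which key bits survive when sender and receiver happen to choose the same measurement basis; it is not the authors' Quantum AES, and a pseudo-random generator stands in for the quantum channel:

```python
import random

def bb84_sift(n_bits, seed=7):
    """Keep sender bits only where sender and receiver chose the same basis
    ('+' rectilinear or 'x' diagonal); mismatched positions are discarded."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n_bits)]
    sender_bases = [rng.choice("+x") for _ in range(n_bits)]
    receiver_bases = [rng.choice("+x") for _ in range(n_bits)]
    return [b for b, s, r in zip(bits, sender_bases, receiver_bases) if s == r]
```

On average about half the transmitted bits survive sifting; the sifted bits would then be error-corrected, privacy-amplified, and fed into AES as key material.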
Citations: 12
IMSS-E: An Intelligent Approach to Design of Adaptive Meta Search System for E Commerce Website Ranking
Dheeraj Malhotra, O. Rishi
With the continuous increase in frequent e-commerce users, online businesses must have more customer-friendly websites to better satisfy the personalized requirements of online customers and hence improve their market share over the competition. Different customers have different purchase requirements at different intervals of time, and hence online retailers often need to deploy new strategies to identify a customer's current purchase requirements. In this research work, we propose the design of a tool called the Intelligent Meta Search System for E-commerce (IMSS-E), which blends the benefits of an Apriori-based MapReduce framework, supported by intelligent technologies such as back-propagation neural networks and the semantic web, with B2C e-commerce, to help online users easily search and rank the e-commerce websites that can best satisfy their personalized online purchase requirements. An extensive experimental evaluation shows that IMSS-E can satisfy the personalized search requirements of e-commerce users better than conventional meta search engines.
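The Apriori component mentioned above prunes candidate itemsets using the frequency of their subsets: a pair can only be frequent if both of its items are. A single-machine sketch for frequent pairs (the paper distributes this step with MapReduce; item names and the threshold here are invented):

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Apriori for size-2 itemsets: count single items, prune infrequent ones,
    then count only candidate pairs whose members are both frequent."""
    item_counts = Counter(i for t in transactions for i in set(t))
    frequent = {i for i, c in item_counts.items() if c >= min_support}
    pair_counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t) & frequent), 2):
            pair_counts[pair] += 1
    return {p: c for p, c in pair_counts.items() if c >= min_support}
```

In a MapReduce setting, each pass becomes a job: mappers emit candidate itemsets per transaction and reducers sum their support counts.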
Citations: 14
Classification of Plant Leaf Diseases Using Gradient and Texture Feature
R. Kaur, Sanjay Singla
This paper presents a new technique for classifying a plant leaf disease (potato late blight) using gradient and texture features together with artificial neural networks. An image is first segmented with the unsupervised Fuzzy C-means clustering algorithm, and an artificial neural network then refines the segmentation. In the proposed approach, decorrelation stretching is used to enhance the color contrast of the input images. Fuzzy C-means clustering is then applied to segment the disease-affected region, which also includes background areas with the same color attributes. Finally, we propose a neural-network-based approach to separate the disease-affected regions from the similarly colored, textured background. The results of our work are promising.
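The unsupervised Fuzzy C-means step can be sketched as follows. This is a generic, minimal implementation operating on pixel feature vectors; the fuzziness exponent `m=2.0` and the toy intensity data in the usage note are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Fuzzy C-means clustering.

    X: (n_samples, n_features) array of pixel feature vectors.
    Returns (centers, U) where U[i, k] is the degree of membership
    of sample i in cluster k (each row of U sums to 1).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # random initial memberships, normalized so each row sums to 1
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)

    for _ in range(max_iter):
        Um = U ** m
        # weighted cluster centers
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Euclidean distance from every sample to every center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)  # avoid division by zero
        # membership update: u_ik proportional to d_ik^(-2/(m-1))
        p = 2.0 / (m - 1.0)
        U_new = (d ** -p) / np.sum(d ** -p, axis=1, keepdims=True)
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

For segmentation, each pixel's label is the cluster with the largest membership, e.g. `labels = U.argmax(axis=1)`; on a leaf image the feature vectors would typically be per-pixel color values after decorrelation stretching.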
R. Kaur, Sanjay Singla. "Classification of Plant Leaf Diseases Using Gradient and Texture Feature." In Proceedings of the International Conference on Advances in Information Communication Technology & Computing, August 12, 2016. DOI: 10.1145/2979779.2979875
Citations: 9