
Latest publications from 2017 Tenth International Conference on Contemporary Computing (IC3)

A fast GPU algorithm for biconnected components
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284293
Mihir Wadwekar, Kishore Kothapalli
Finding the articulation points and the biconnected components of an undirected graph has long been a problem of great interest in graph theory. Over the years, several sequential and parallel algorithms have been presented for this problem. This paper presents and implements a fast parallel algorithm on the GPU, which is, to the best of our knowledge, the first such attempt and also the fastest implementation across architectures. The implementation is on average 4x faster than the next best implementation. We also apply an edge-pruning technique that yields a further 2x speedup on dense graphs.
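The abstract does not reproduce the GPU algorithm itself; as a point of reference, a minimal sequential baseline for the articulation-point half of the problem (standard Tarjan-style DFS, not the paper's parallel method) can be sketched as:

```python
# Sequential baseline only: Tarjan-style DFS for articulation points.
# The paper's contribution is a parallel GPU formulation of this task.
def articulation_points(adj):
    """adj: dict mapping vertex -> list of neighbours (undirected graph)."""
    disc, low = {}, {}
    cut = set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v not in disc:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                # a non-root u is a cut vertex if some child cannot
                # reach an ancestor of u without going through u
                if parent is not None and low[v] >= disc[u]:
                    cut.add(u)
            elif v != parent:
                low[u] = min(low[u], disc[v])
        # the DFS root is a cut vertex iff it has >= 2 DFS children
        if parent is None and children >= 2:
            cut.add(u)

    for s in adj:
        if s not in disc:
            dfs(s, None)
    return cut
```

On the path graph 0-1-2 this returns {1}; on a triangle it returns the empty set, since removing any one vertex leaves the rest connected.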
Citations: 7
Clickstream & behavioral analysis with context awareness for e-commercial applications
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284328
Sakshi Bansal, Chetna Gupta, Adwitiya Sinha
The widespread eminence of internetworking plays a vital role in revolutionizing the commercial domain. Prior to making purchases, people spend a lot of time on the internet gathering information and feedback so as to better direct their decisions. Since customers are not present in person in the stores, they can easily shift from one supplier to another via online portals. From a business point of view, this can harm the profit and publicity of traders. In this research, the prime focus is to overcome such problems confronted by trading industries. Our proposed model analyses the 'Clickstream Events' of online users, along with a set of contextual details, to provide better recommendations. Recording the behavior of several users can help industries discover users' habits and tendencies, which can lead to better and more effective decisions that improve business profit and market coverage. Further, our proposed model aims at discovering relationships between various items from the context of user interest. Market Basket Analysis is performed to assist the user with appropriate options while purchasing products, taking items already purchased into account, thereby offering a better buying experience.
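The Market Basket Analysis step mentioned above is, in its classic form, support-based co-occurrence mining over transactions; a toy sketch (the transaction data and threshold here are invented, not from the paper):

```python
from collections import Counter
from itertools import combinations

# Illustrative only: find item pairs whose support (fraction of
# baskets containing both items) meets a minimum threshold.
def frequent_pairs(transactions, min_support=0.5):
    n = len(transactions)
    pair_counts = Counter()
    for basket in transactions:
        for pair in combinations(sorted(set(basket)), 2):
            pair_counts[pair] += 1
    return {pair: cnt / n for pair, cnt in pair_counts.items()
            if cnt / n >= min_support}

baskets = [["bread", "milk"], ["bread", "milk", "eggs"],
           ["milk", "eggs"], ["bread", "milk"]]
# {('bread', 'milk'): 0.75, ('eggs', 'milk'): 0.5}
print(frequent_pairs(baskets))
```

Pairs that survive the threshold would then back recommendations of the form "customers who bought X also bought Y".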
Citations: 6
A study on the minimum dominating set problem approximation in parallel
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284287
Mahak Gambhir, Kishore Kothapalli
A dominating set of small size is useful in several settings, including wireless networks, document summarization, secure system design, and the like. In this paper, we start by studying three distributed algorithms that produce small-sized dominating sets in a few rounds. We interpret these algorithms in the natural shared-memory setting and experiment with them on a multi-core CPU. Based on observations from these experimental results, we propose variations of the three algorithms and show how the proposed variations offer interesting trade-offs between the size of the dominating set produced and the time taken.
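The abstract does not spell out the three algorithms; for context, the standard sequential greedy approximation for minimum dominating set, which distributed round-based algorithms of this kind emulate, looks like:

```python
# Minimal sketch of the classic greedy O(log n)-approximation:
# repeatedly pick the vertex that covers the most uncovered vertices.
# The paper studies distributed/parallel variants, not this form.
def greedy_dominating_set(adj):
    """adj: dict mapping vertex -> set of neighbours."""
    uncovered = set(adj)
    dom = set()
    while uncovered:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        dom.add(best)
        uncovered -= {best} | adj[best]
    return dom
```

On a star graph the greedy rule immediately selects the centre, giving a dominating set of size one.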
Citations: 1
Designing energy efficient traveling paths for multiple mobile chargers in wireless rechargeable sensor networks
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284332
Abhinav Tomar, P. K. Jana
Mobile charging is an important topic for wireless rechargeable sensor networks (WRSNs) and has been studied extensively over the past few years. With the help of wireless energy transfer (WET), it is now possible to extend the lifetime of sensor nodes to a longer period. In WET, a wireless charging vehicle (WCV) moves along its designed traveling path in the network and charges sensor nodes by halting at certain stopping locations. However, large-scale WRSNs demand multiple WCVs to make mobile charging feasible. Note that, when designing energy-efficient traveling paths, it is desirable to minimize the stopping locations of a WCV, which improves charging efficiency by a noticeable amount. Moreover, charging multiple nodes at the same time improves charging performance and is thus a recent trend. Inspired by the above facts, in this paper we address the objective of designing energy-efficient traveling paths for multiple WCVs with multi-node charging. Our proposed scheme works in two phases. In the first phase, we perform clustering to divide the network region into charging subregions according to the available number of WCVs. In the second phase, we apply a charging-radius-based nearest-neighbor approach (CR-NN) to find anchor points (i.e., stopping locations) for those WCVs and design the traveling paths. The simulation results confirm the effectiveness of our scheme and demonstrate performance gains with respect to several metrics such as charging latency, waiting time, and node failure rate.
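The path-design step of CR-NN is not detailed in the abstract; one ingredient of such an approach, a plain nearest-neighbour ordering of already-chosen anchor points, can be sketched as follows (the charging-radius-based selection of the anchors themselves is omitted, and the coordinates are invented):

```python
import math

# Hedged sketch: greedy nearest-neighbour tour over 2-D anchor points,
# a common heuristic for ordering stopping locations into a path.
def nn_tour(points, start=0):
    """Return indices of points in greedy nearest-neighbour order."""
    unvisited = set(range(len(points)))
    tour = [start]
    unvisited.remove(start)
    while unvisited:
        cur = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(cur, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

anchors = [(0, 0), (5, 5), (1, 0), (6, 5)]
print(nn_tour(anchors))  # visits the closest remaining anchor each step
```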
Citations: 10
A frequent itemset reduction algorithm for global pattern mining on distributed data streams
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284320
Shalini, Sanjay Kumar Jain
In the present scenario, extracting global frequent itemsets from big data distributed across multiple data streams, with its real-time requirements, is a complex problem. In this article, we propose an algorithm that reduces the number of local frequent itemsets communicated to the root node when extracting global patterns from distributed data streams. Here, the algorithm sends only local frequent itemsets to the root node instead of sending a summary of the local data streams. We compress sets of local frequent itemsets and send them to the root node using an algorithm called the Frequent Itemset Reduction (FIR) algorithm. We present two indexing structures, known as the I-list and the Modified Seg-tree (MsegT), to store all local frequent itemsets at the root node. Our experimental study shows that the FIR algorithm reduces communication cost to a large extent and that MsegT produces substantially better results than the I-list and a few state-of-the-art techniques.
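The core idea of shipping only locally frequent itemsets upward can be illustrated with a toy miner; FIR's compression and the I-list/MsegT index structures are not reproduced, and the window data and thresholds here are invented:

```python
from collections import Counter
from itertools import combinations

# Toy local miner: count itemsets up to max_size over a stream window
# and keep only those meeting the local frequency threshold. Only this
# reduced set would be communicated to the root node.
def local_frequent_itemsets(window, min_count=2, max_size=2):
    counts = Counter()
    for txn in window:
        items = sorted(set(txn))
        for k in range(1, max_size + 1):
            for itemset in combinations(items, k):
                counts[itemset] += 1
    return {s: c for s, c in counts.items() if c >= min_count}

window = [["a", "b"], ["a", "b", "c"], ["a"]]
print(local_frequent_itemsets(window))
```

The root node would then merge such reduced sets from all stream sources to derive the global frequent itemsets.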
Citations: 1
Normalized videosnapping: A non-linear video synchronization approach
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284325
Ankit Tripathi, Benu Changmai, Shrukul Habib, Nagaratna B. Chittaragi, S. Koolagudi
Video synchronization is the task of content-based alignment of two or more videos depicting the same event with spatial variations, or the same object with temporal changes. It is one of the most fundamental tasks when manipulating temporally or spatially multi-perspective video shots. In this paper, a model is proposed to deal with the synchronization problem that efficiently tackles issues arising while synchronizing two videos. Here, videos are handled at the frame level, with features from each frame forming the basis of alignment. Features are matched and mapped to generate a cost matrix of similarities among the frames of the videos of concern. A modified version of Dijkstra's algorithm that yields an optimal path through the matrix is applied. Along the optimal path, events are grouped into adjacent regions, after which temporal warpings are introduced into the videos to achieve the best possible alignment among them. The model has proven to be efficient and compatible with all classes of video quality levels.
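The paper traverses the cost matrix with a modified Dijkstra's algorithm; as a simpler stand-in with the same flavour, a monotone dynamic-programming shortest path through a frame-dissimilarity matrix recovers a warping path of the kind described:

```python
# Illustrative stand-in only: DTW-style dynamic programming over a
# frame-dissimilarity matrix, not the paper's modified Dijkstra.
def alignment_path(cost):
    n, m = len(cost), len(cost[0])
    INF = float("inf")
    acc = [[INF] * m for _ in range(n)]
    acc[0][0] = cost[0][0]
    for i in range(n):
        for j in range(m):
            if i == j == 0:
                continue
            best = min(acc[i - 1][j] if i else INF,
                       acc[i][j - 1] if j else INF,
                       acc[i - 1][j - 1] if i and j else INF)
            acc[i][j] = cost[i][j] + best
    # backtrack from the end to recover the frame-to-frame alignment
    path, i, j = [(n - 1, m - 1)], n - 1, m - 1
    while (i, j) != (0, 0):
        moves = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((p for p in moves if p[0] >= 0 and p[1] >= 0),
                   key=lambda p: acc[p[0]][p[1]])
        path.append((i, j))
    return path[::-1], acc[n - 1][m - 1]
```

Each entry `cost[i][j]` would be the feature dissimilarity between frame i of one video and frame j of the other; the returned path is the alignment to which temporal warping is applied.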
Citations: 1
CRUISE: A platform for crowdsourcing Requirements Elicitation and evolution
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284308
Richa Sharma, A. Sureka
Crowdsourcing has aroused a lot of interest in the Requirements Engineering (RE) research community. RE activities are inherently complex in nature, both effort- and time-intensive, and quite dependent on each other. The potential of crowdsourcing for addressing complex tasks in general has been acknowledged. We intend to study the potential of crowdsourcing for a broad spectrum of RE activities, from gathering requirements to their validation, through our proposed tool, CRUISE (Crowdsourcing for Requirements Engineering). CRUISE aims to involve interested users in gathering, analysing, validating, prioritizing, and negotiating requirements. In this paper, we present our vision and future roadmap for CRUISE. We also report observations from a preliminary experimental study that checks the feasibility and viability of a crowdsourcing-based tool for the Requirements Elicitation activity.
Citations: 11
Data clustering using enhanced biogeography-based optimization
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284305
Raju Pal, M. Saraswat
Data clustering is one of the important tools in data analysis; it partitions a dataset into different groups based on similarity and dissimilarity measures. Clustering is still an NP-hard problem for large datasets due to the presence of irrelevant, overlapping, missing, and unknown features, which cause it to converge to local optima. Therefore, this paper introduces a novel hybrid meta-heuristic data clustering approach based on K-means and biogeography-based optimization (BBO). The proposed method uses K-means to initialize the population of BBO. Simulations have been carried out on eleven datasets. Experimental and statistical results validate that the proposed method outperforms the existing methods.
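The K-means seeding step can be sketched as follows; this is one plain Lloyd-style K-means implementation, and the BBO migration/mutation loop that it would initialize is omitted:

```python
import random

# Sketch under assumptions: simple K-means whose final centres could
# seed the initial habitat population of BBO. Data and k are invented.
def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centre (squared distance)
            idx = min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(p, centres[c])))
            clusters[idx].append(p)
        # recompute each centre as the mean of its cluster
        centres = [tuple(sum(col) / len(cl) for col in zip(*cl))
                   if cl else centres[i]
                   for i, cl in enumerate(clusters)]
    return centres

data = [(0, 0), (0, 1), (10, 10), (10, 11)]
print(kmeans(data, 2))  # two centres, one per natural cluster
```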
Citations: 32
No-escape search: Design and implementation of cloud based directory content search
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284288
Harshit Gujral, Abhinav Sharma, S. Mittal
Searching in-file content is a crucial task in everyday computing, made particularly difficult by the lack of an efficient in-file content search system in the Windows operating system. The goal of this paper is to present a cloud-based exhaustive in-file content search algorithm using a three-dimensional hash data structure. To accommodate an abundant hash size and to save the user's memory space, we use the cloud to host the hash structure. We aim to present the user with the required retrieval in O(1) (constant) time complexity, usually within 3-8 seconds. Our retrieval is a comprehensible combination of the user's input string and the filename(s) containing it. Also, unlike the Windows operating system's search system, it can include the location(s), and even multiple occurrences, of a user-defined string inside a file.
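The paper's three-dimensional, cloud-hosted hash is not described in enough detail to reproduce; a much simpler word-to-(file, position) inverted index conveys the flavour of content search that returns filenames with in-file locations:

```python
from collections import defaultdict

# Toy sketch only: an in-memory inverted index from words to
# (filename, word position) pairs, standing in for the paper's
# far more elaborate cloud-hosted hash structure.
def build_index(files):
    """files: dict filename -> text. Returns word -> [(file, pos), ...]."""
    index = defaultdict(list)
    for name, text in files.items():
        for pos, word in enumerate(text.lower().split()):
            index[word].append((name, pos))
    return index

def search(index, word):
    """All (filename, position) hits for a word, in O(1) lookups."""
    return index.get(word.lower(), [])

idx = build_index({"a.txt": "hello world hello", "b.txt": "world"})
print(search(idx, "hello"))  # every occurrence, with its position
```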
Citations: 2
Sarcasm detection of tweets: A comparative study
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284317
Tanya Jain, Nilesh Agrawal, Garima Goyal, Niyati Aggrawal
Sarcasm is a nuanced form of communication in which the individual states the opposite of what is implied. One of the major challenges of sarcasm detection is its ambiguous nature: there is no prescribed definition of sarcasm. Another major challenge is the growing size of languages; every day, hundreds of new slang words are created and used on these sites. Hence, the existing corpus of positive and negative sentiments may not prove accurate in detecting sarcasm. Also, recent developments in online social networks allow users to attach varied kinds of emoticons to text. These emoticons may change the polarity of the text and make it sarcastic. Due to these difficulties and the inherently tricky nature of sarcasm, it is generally ignored during social network analysis, and the results of such analyses are adversely affected. Thus, sarcasm detection is one of the most critical problems we need to overcome. Detection of sarcastic content is vital to various NLP-based systems such as text summarization and sentiment analysis. In this paper, we address the problem of sarcasm detection by leveraging the most common expression of sarcasm: "positive sentiment attached to a negative situation". Our work uses two ensemble-based approaches, a voted ensemble classifier and a random forest classifier. Unlike current approaches to sarcasm detection, which rely on an existing corpus of positive and negative sentiments for training the classifiers, we use a seeding algorithm to generate the training corpus. The proposed model also uses a pragmatic classifier to detect emoticon-based sarcasm.
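The "positive sentiment attached to a negative situation" cue can be illustrated with a deliberately naive rule; both word lists below are invented stand-ins for the seeded corpora the paper generates, and a real classifier would of course use far richer features:

```python
# Toy rule only: flag a tweet when a positive-sentiment word co-occurs
# with a negative-situation word. Both lexicons are invented examples.
POSITIVE = {"love", "great", "awesome", "yay"}
NEGATIVE_SITUATION = {"monday", "traffic", "exam", "delay"}

def looks_sarcastic(tweet):
    words = set(tweet.lower().replace("!", " ").split())
    return bool(words & POSITIVE) and bool(words & NEGATIVE_SITUATION)

print(looks_sarcastic("I love Monday traffic!"))  # positive + negative cue
print(looks_sarcastic("great weather today"))     # positive cue only
```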
Sarcasm detection of tweets: A comparative study
Tanya Jain, Nilesh Agrawal, Garima Goyal, Niyati Aggrawal
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284317
Sarcasm is a nuanced form of communication in which an individual states the opposite of what is implied. One of the major challenges of sarcasm detection is its ambiguous nature: there is no prescribed definition of sarcasm. Another major challenge is the constantly growing vocabulary of online language; every day, hundreds of new slang terms are coined and used on these sites. Hence, existing corpora of positive and negative sentiment may not be accurate for detecting sarcasm. Moreover, recent developments in online social networks allow users to attach a wide variety of emoticons to their text; these emoticons can invert the polarity of the text and make it sarcastic. Because of these difficulties and the inherently tricky nature of sarcasm, it is generally ignored during social network analysis, and the results of such analyses are adversely affected. Sarcasm detection is therefore one of the most critical problems to overcome, and detecting sarcastic content is vital to NLP-based systems such as text summarization and sentiment analysis. In this paper we address sarcasm detection by leveraging the most common expression of sarcasm, "positive sentiment attached to a negative situation". Our work uses two ensemble-based approaches: a voted ensemble classifier and a random forest classifier. Unlike current approaches to sarcasm detection, which rely on existing corpora of positive and negative sentiment to train classifiers, we use a seeding algorithm to generate the training corpus. The proposed model also uses a pragmatic classifier to detect emoticon-based sarcasm.
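The abstract's core heuristic, a positive sentiment phrase attached to a negative situation, combined with a pragmatic emoticon check and majority voting, can be illustrated with a minimal sketch. This is not the authors' implementation: the lexicons, the emoticon set, and the third (stand-in) voter are all toy assumptions made for the example.

```python
# Toy lexicons -- placeholders, not the corpora the paper's seeding
# algorithm would generate.
POSITIVE_WORDS = {"love", "great", "awesome", "yay", "wonderful"}
NEGATIVE_SITUATIONS = {"monday", "traffic", "exam", "homework", "rain", "stuck"}
SARCASTIC_EMOTICONS = {":)", ";)", ":p"}  # assumed markers for this sketch

def contrast_classifier(tokens):
    """Vote 1 when positive sentiment co-occurs with a negative situation."""
    has_pos = any(t in POSITIVE_WORDS for t in tokens)
    has_neg = any(t in NEGATIVE_SITUATIONS for t in tokens)
    return 1 if (has_pos and has_neg) else 0

def emoticon_classifier(tokens):
    """Pragmatic vote: a positive emoticon attached to negative content."""
    has_emo = any(t in SARCASTIC_EMOTICONS for t in tokens)
    has_neg = any(t in NEGATIVE_SITUATIONS for t in tokens)
    return 1 if (has_emo and has_neg) else 0

def brevity_classifier(tokens):
    """Weak stand-in for a learned model: short contrastive tweets."""
    return 1 if len(tokens) <= 8 and contrast_classifier(tokens) else 0

def voted_ensemble(text):
    """Majority vote over the individual classifiers (1 = sarcastic)."""
    tokens = text.lower().split()
    votes = [contrast_classifier(tokens),
             emoticon_classifier(tokens),
             brevity_classifier(tokens)]
    return 1 if sum(votes) >= 2 else 0
```

In the paper the individual voters would be trained classifiers over a seeded corpus rather than hand-written rules; majority voting over heterogeneous voters is what makes the ensemble robust to any single detector's blind spots.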
Citations: 20
Journal
2017 Tenth International Conference on Contemporary Computing (IC3)