
2017 Tenth International Conference on Contemporary Computing (IC3): Latest Publications

Fault tolerant streaming of live news using multi-node Cassandra
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284310
Shubham Dhingra, Shreeya Sharma, Parmeet Kaur, Chetna Dabas
The paper presents an Android application for streaming live news using multi-node Cassandra as the underlying data store. The application targets today's users, who constantly wish to keep abreast of the latest and trending news through online media, and it provides abstracted day-to-day trending news in various categories, for instance national, technology, sports and science, in a single swipe. Our application uses Apache Cassandra, a distributed NoSQL database management system, to distribute news-related data across 4 nodes. Cassandra was chosen for its capability to manage and manipulate large amounts of data across commodity servers along with its high availability and fault tolerance. To test the distribution and replication of data across the nodes, one or more nodes were deliberately failed in the experiments. The results show successful retrieval of data despite node failure.
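To make the fault-tolerance claim concrete, here is a minimal sketch (not the authors' code; the keyspace, table, contact points and replication factor of 3 are assumptions) of how a news table on a four-node Cassandra cluster can be created and read at QUORUM consistency with the DataStax Python driver, so that a single node failure does not block retrieval.

```python
# Minimal sketch of a fault-tolerant news store on a 4-node Cassandra cluster.
# Keyspace/table names, contact points and replication factor are assumptions.
from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

cluster = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"])  # 4-node cluster
session = cluster.connect()

# Replicate each row on 3 of the 4 nodes so any single node can fail.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS news
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.set_keyspace("news")
session.execute("""
    CREATE TABLE IF NOT EXISTS trending (
        category text, published_at timestamp, headline text, summary text,
        PRIMARY KEY (category, published_at)
    ) WITH CLUSTERING ORDER BY (published_at DESC)
""")

# QUORUM (2 of 3 replicas) still succeeds when one replica node is down.
query = SimpleStatement(
    "SELECT headline, summary FROM trending WHERE category = %s LIMIT 20",
    consistency_level=ConsistencyLevel.QUORUM,
)
for row in session.execute(query, ("sports",)):
    print(row.headline)
```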
Citations: 2
Hybrid particle swarm training for convolution neural network (CNN)
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284356
Yoshika Chhabra, Sanchit Varshney, Ankita
Convolutional Neural Networks (CNNs) are among the most widely used neural networks today, their applications are extremely varied, and they have recently proved helpful in deep learning as well. As CNNs are applied to ever more complex domains, their training complexity also increases, and many hybrid algorithms have been proposed to address this. In this paper, Particle Swarm Optimization (PSO) is used to reduce the overall complexity of training. The PSO-CNN hybrid decreases the number of epochs required for training and the dependency on GPU hardware. The resulting algorithm achieves a 3–4% increase in accuracy with fewer epochs, which in turn lowers the hardware requirements for training CNNs. The hybrid training algorithm is also able to overcome the local minima problem of regular backpropagation training.
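A minimal, framework-agnostic sketch of the particle swarm update that such a hybrid could apply to a flattened CNN weight vector is given below; the fitness function is a placeholder standing in for a forward pass of the network on a validation batch, and all hyperparameters are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of particle swarm optimization over a flattened weight vector.
# In a PSO/CNN hybrid the fitness would be the network's loss on a batch; here
# evaluate_loss is a placeholder (assumption), so the swarm update is the point.
import numpy as np

def evaluate_loss(weights):
    # Placeholder fitness: stands in for "load weights into the CNN, run a
    # forward pass, return validation loss". Replace with the real evaluation.
    return np.sum((weights - 0.5) ** 2)

def pso(dim, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([evaluate_loss(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity pulls each particle toward its own best and the swarm best.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([evaluate_loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_weights, best_loss = pso(dim=10)
print("best loss:", best_loss)
```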
Citations: 5
A ranking based recommender system for cold start & data sparsity problem
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284347
Anshu Sang, S. Vishwakarma
Recommender systems are now common and useful tools for predicting items of interest and presenting them to users as suitable recommendations. Such a system is characterised by the type of items it handles and by the technique used to generate recommendations that give valuable, effective suggestions to the end user. The present work addresses two well-known problems in recommendation, cold start and data sparsity, and resolves them to a great extent with high accuracy. As our results show, the proposed system provides recommendations to new users with high reliability and accuracy.
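The paper's exact algorithm is not reproduced here, but the sketch below illustrates one common ranking-style fallback for the cold-start case on assumed toy data: with no ratings from a new user, items are ranked by a popularity-damped mean rating, which also softens data sparsity.

```python
# Minimal cold-start sketch (not the paper's algorithm): rank items for a new
# user by a damped mean rating, so sparsely rated items regress to a prior.
from collections import defaultdict

ratings = [  # (user, item, rating) — toy data
    ("u1", "i1", 5), ("u2", "i1", 4), ("u3", "i2", 2),
    ("u1", "i3", 4), ("u2", "i3", 5), ("u3", "i3", 3),
]

def rank_for_new_user(ratings, damping=5, prior=3.0, top_k=3):
    sums, counts = defaultdict(float), defaultdict(int)
    for _, item, r in ratings:
        sums[item] += r
        counts[item] += 1
    # Damped mean pulls items with few ratings toward the global prior.
    scores = {i: (sums[i] + damping * prior) / (counts[i] + damping) for i in sums}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(rank_for_new_user(ratings))  # ['i1', 'i3', 'i2'] with the toy data above
```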
Citations: 12
The low-rate denial of service attack based comparative study of active queue management scheme
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284360
Sanjeev Patel, Abhinav Sharma
The Denial of Service (DoS) attack is a serious threat to the stability of the Internet. In a DoS attack, a large number of systems send useless, malicious packets that jam a victim, making the network's resources unavailable. The Low-rate DoS (L-DoS) attack is one of the major types of DoS attack and is not easy to detect. We compare various Active Queue Management (AQM) schemes with respect to the decrease in throughput and the increase in loss rate caused by the attack. The AQM techniques compared are Drop-tail, Random Exponential Marking (REM), Random Early Detection (RED), Fair Queuing (FQ), Stochastic Fair Queuing (SFQ), and the Proportional Integrator (PI). Throughput, end-to-end delay and loss rate are computed and plotted against the number of attackers and the burst rate.
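As an illustration of one of the compared schemes, the sketch below implements the standard RED drop decision (an EWMA of the queue length, with drop probability growing linearly between two thresholds); the thresholds and maximum drop probability are illustrative values, not the paper's simulation settings.

```python
# Minimal sketch of the Random Early Detection (RED) enqueue decision, one of
# the AQM schemes compared in the paper. Parameter values are illustrative.
import random

class RedQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th, self.max_th, self.max_p, self.weight = min_th, max_th, max_p, weight
        self.avg = 0.0
        self.queue = []

    def enqueue(self, pkt):
        # Exponentially weighted moving average of the instantaneous queue length.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            drop = False
        elif self.avg >= self.max_th:
            drop = True
        else:
            # Drop probability grows linearly between the two thresholds.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            drop = random.random() < p
        if not drop:
            self.queue.append(pkt)
        return not drop  # True if the packet was accepted

q = RedQueue()
accepted = sum(q.enqueue(i) for i in range(100))
print("accepted", accepted, "of 100 packets")
```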
Citations: 7
Investigation of IR based topic models on issue tracking systems to infer software-specific semantic related term pairs
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284329
D. Correa, A. Sureka, Sangeeta Lal
Software maintenance is a core component of any software development life-cycle. Contemporary software systems contain voluminous and complex information stored in software repositories, and software maintenance professionals spend a significant amount of time searching and exploring these repositories for common maintenance tasks such as bug fixing, feature enhancement, code refactoring and reengineering. Tools and methods that facilitate search in software repositories can therefore give maintenance professionals faster access to the required information and increase productivity. A domain-specific lexical resource is an important tool for bridging the semantic gap between the information need and the search query. In this work, we investigate the use of information retrieval (IR) based topic models (such as LSI and LDA) to infer semantically related terms for a software-context-specific lexical resource. We perform our experiments on the issue tracker of Google Chromium, a widely popular open-source browser, which contains 134,000+ bug reports. The study has two parts. (1) In the first part, we apply our IR models to the free-form natural-language text present in the defect tracking system, perform qualitative analysis on the output, and uncover semantically related terms in the Google Chromium software context; we find that semantically similar term pairs can be inferred in four different contexts: English language, Software, Google Chromium and Code details. (2) In the second part, we use the semantically inferred terms obtained from the IR models to support the maintenance task of duplicate bug report detection. Our results demonstrate that using IR-based topic models on defect tracking systems to automatically infer semantically related terms can help build a software domain-specific lexical resource and reduce the vocabulary gap.
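The sketch below shows, on toy stand-in text rather than the Chromium tracker, how an LSI-style decomposition of bug-report text can surface semantically related term pairs; the vectorizer settings and the number of latent topics are assumptions, not the paper's configuration.

```python
# Minimal LSI-style sketch: factor a TF-IDF document-term matrix and compare
# term vectors in the latent space to find related term pairs.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

bug_reports = [  # toy stand-ins for issue-tracker text
    "browser crash when opening new tab",
    "tab crashes on page reload",
    "renderer process memory leak",
    "memory usage grows after page reload",
]

vec = TfidfVectorizer()
X = vec.fit_transform(bug_reports)        # documents x terms
svd = TruncatedSVD(n_components=2, random_state=0).fit(X)
term_vectors = svd.components_.T          # terms x latent topics
terms = list(vec.get_feature_names_out())

sim = cosine_similarity(term_vectors)
i = terms.index("crash")
related = np.argsort(-sim[i])[1:4]        # skip the term itself
print("terms related to 'crash':", [terms[j] for j in related])
```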
Citations: 0
Analyzing fault prediction usefulness from cost perspective using source code metrics
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284297
L. Kumar, A. Sureka
Software fault prediction techniques are useful for optimizing test resource allocation. Fault prediction based on source code metrics and machine learning uses static program features as input predictors to estimate the fault proneness of a class or module. We compare five machine learning algorithms on their fault prediction performance through experiments on 56 open source projects. Several researchers have argued for applying software engineering economics and testing cost when evaluating a software quality assurance activity. We therefore evaluate the performance and usefulness of fault prediction models within a cost evaluation framework and present our experimental results. We also propose a novel approach that uses decision trees to predict the usefulness of fault prediction from the distributional characteristics of source code metrics, fusing the output of the cost-based usefulness evaluation with those distributional metrics.
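A minimal sketch of the final idea, on assumed toy data, is shown below: a decision tree trained on distributional characteristics of source-code metrics (here just means and variances, an assumption) predicts whether fault prediction would be cost-effective for a project.

```python
# Minimal sketch (toy data, not the paper's dataset or labels): a decision tree
# over distributional characteristics of source-code metrics predicting whether
# fault prediction is judged useful under a cost evaluation framework.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Toy "projects": each row = [mean LOC per class, variance of LOC, mean complexity]
X = rng.normal(loc=[300, 90000, 8], scale=[80, 20000, 2], size=(56, 3))
# Toy labels: 1 = fault prediction judged cost-effective, 0 = not.
y = rng.integers(0, 2, size=56)

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
new_project = [[250, 70000, 9]]
print("fault prediction useful?", bool(clf.predict(new_project)[0]))
```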
Citations: 4
Keystroke dynamics based authentication system with unrestricted data collection
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284312
Shashank Gupta, Kavita Pandey, Jatin Yadav, Richa Sharma
In today's world, stronger authentication mechanisms have become essential to deal with the rise in synthetic forgeries. One such mechanism uses keystroke dynamics to uniquely identify users by their typing pattern. This paper proposes a statistical approach to implementing a keystroke dynamics based authentication system. Our primary focus while building the system is on providing an unrestricted environment for data collection. Removing restrictions, however, makes the data unfit for direct feature extraction, so it must first be preprocessed. The bulk of the paper therefore presents techniques for improved data preprocessing, chiefly the removal of outlier values from the dataset. We show the typing trajectories of a particular user before and after outlier removal to demonstrate that the similarities in typing pattern become even more prominent after preprocessing. With the proposed outlier-removal method, the authentication accuracy of the system increases manyfold.
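The sketch below illustrates the kind of preprocessing the paper emphasises, using an interquartile-range rule (an assumed variant, not necessarily the authors' exact criterion) to strip outlier hold times before feature extraction.

```python
# Minimal sketch of outlier removal on keystroke timing data (IQR rule assumed).
import numpy as np

def remove_outliers(samples, k=1.5):
    samples = np.asarray(samples, dtype=float)
    q1, q3 = np.percentile(samples, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    # Keep only values inside the whiskers; extreme timings are dropped.
    return samples[(samples >= lo) & (samples <= hi)]

hold_times_ms = [92, 101, 97, 95, 430, 99, 88, 94, 15, 103]  # two obvious outliers
clean = remove_outliers(hold_times_ms)
print("kept", len(clean), "of", len(hold_times_ms), "samples:", clean)
```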
Citations: 0
Recognition of epilepsy from non-seizure electroencephalogram using combination of linear SVM and time domain attributes
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284306
Debanshu Bhowmick, Atrija Singh, S. Sanyal
Neural disorders such as epilepsy can be classified efficiently using soft computing methods. Many methods of detecting epilepsy using various time and time-frequency domain features have previously been proposed. Our study proposes a distinctive feature set of time domain attributes, waveform length, root mean square, mean absolute value and zero crossings, combined with a linear Support Vector Machine, to classify a set of EEG signals as epileptic or non-epileptic under non-seizure conditions. The proposed classification approach achieves an accuracy of 95%.
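The feature set is simple enough to sketch directly: the code below computes waveform length, RMS, mean absolute value and zero crossings for synthetic stand-in windows and fits a linear SVM; the data and parameters are illustrative assumptions, not the paper's EEG dataset.

```python
# Minimal sketch of the proposed time-domain feature set fed to a linear SVM.
# Signals are synthetic stand-ins for EEG windows.
import numpy as np
from sklearn.svm import LinearSVC

def time_domain_features(x):
    wl = np.sum(np.abs(np.diff(x)))                          # waveform length
    rms = np.sqrt(np.mean(x ** 2))                           # root mean square
    mav = np.mean(np.abs(x))                                 # mean absolute value
    zc = np.sum(np.signbit(x[:-1]) != np.signbit(x[1:]))     # zero crossings
    return [wl, rms, mav, zc]

rng = np.random.default_rng(1)
# Toy windows: class 1 ("epileptic-like") has larger amplitude than class 0.
signals = [rng.normal(0, amp, 256) for amp in ([1.0] * 30 + [3.0] * 30)]
X = np.array([time_domain_features(s) for s in signals])
y = np.array([0] * 30 + [1] * 30)

clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```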
Citations: 1
Performance evaluation of various data structures in building efficient indexing schemes for XML documents
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284351
G. Dhanalekshmi, Krishna Asawa
With the increasing volume of XML documents on the Web, indexing, storing and retrieving these documents is of great concern. Indexing allows efficient access to parts of an XML document, and several methods have been proposed for indexing XML documents, generally classified by the database and information retrieval communities. This paper presents a comparative study of different data structures for building an efficient index over XML documents. Experiments evaluate the performance of the various index structures in terms of index construction time and index storage space.
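The sketch below illustrates the style of comparison reported: building a simple element-path index over a synthetic XML document with two data structures (a hash map and a sorted list queried by binary search) and timing construction; the document and the chosen structures are assumptions, not the paper's benchmark.

```python
# Minimal sketch comparing two index structures over XML element paths:
# a hash map (dict) and a sorted list queried with binary search.
import time, bisect
import xml.etree.ElementTree as ET

# Synthetic document with many repeated element paths.
xml_doc = "<lib>" + "".join(
    f"<book id='{i}'><title>t{i}</title></book>" for i in range(5000)
) + "</lib>"
root = ET.fromstring(xml_doc)

def element_paths(root):
    # Yield (path, element) pairs for every element in the tree.
    stack = [("/" + root.tag, root)]
    while stack:
        path, node = stack.pop()
        yield path, node
        for child in node:
            stack.append((path + "/" + child.tag, child))

# Structure 1: hash map from path to list of matching elements.
t0 = time.perf_counter()
hash_index = {}
for path, node in element_paths(root):
    hash_index.setdefault(path, []).append(node)
t1 = time.perf_counter()

# Structure 2: sorted list of (path, element) pairs, queried with bisect.
sorted_index = sorted(element_paths(root), key=lambda pn: pn[0])
t2 = time.perf_counter()

keys = [p for p, _ in sorted_index]
lo = bisect.bisect_left(keys, "/lib/book/title")
hi = bisect.bisect_right(keys, "/lib/book/title")
print(f"hash build {t1 - t0:.4f}s, sorted build {t2 - t1:.4f}s")
print(f"{len(hash_index.get('/lib/book/title', []))} hits via hash, {hi - lo} via bisect")
```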
Citations: 0
Secure live virtual machine migration through runtime monitors
Pub Date : 2017-08-01 DOI: 10.1109/IC3.2017.8284316
Ahmed M. Mahfouz, Md Lutfar Rahman, S. Shiva
In this paper, we propose a new model for live migration of virtual machines (VMs) in a secure environment. Live migration moves a VM from one physical host to another without interrupting any of the services running on it. We review the stages of the live migration process, identify the threats to it, and propose a migration model that fulfills the key security requirements for a secure live VM migration process.
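As one assumed illustration of what a runtime monitor might check (the paper's own monitors are not reproduced here), the sketch below authenticates each transferred memory/disk chunk with an HMAC so that tampering in transit is detected at the destination.

```python
# Minimal sketch (assumed illustration, not the authors' monitor): HMAC-tag each
# migrated chunk on the source and verify it on the destination.
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)  # in practice provisioned to both hypervisors beforehand

def tag_chunk(chunk: bytes) -> bytes:
    # Source-side monitor computes an HMAC-SHA256 tag for each chunk.
    return hmac.new(SHARED_KEY, chunk, hashlib.sha256).digest()

def verify_chunk(chunk: bytes, tag: bytes) -> bool:
    # Destination-side monitor accepts the chunk only if the tag verifies.
    return hmac.compare_digest(tag_chunk(chunk), tag)

# Source host: tag each chunk (stand-ins for VM memory pages) before sending.
chunks = [os.urandom(4096) for _ in range(3)]
stream = [(c, tag_chunk(c)) for c in chunks]

# Simulate in-transit tampering with the second chunk.
c1, t1 = stream[1]
stream[1] = (bytes([c1[0] ^ 0xFF]) + c1[1:], t1)

print([verify_chunk(c, t) for c, t in stream])  # [True, False, True]
```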
Citations: 5