
Latest publications: 2014 International Conference on Contemporary Computing and Informatics (IC3I)

A generic approach for runtime object creation and visualization
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019576
Sanath S. Shenoy, C. Vijeth
Applications today are commonly built using object-oriented programming, and a number of methods exist for retrieving data from objects. Technology has also matured to the point where objects can be composed at runtime and used for further processing. In modern applications developed in object-oriented languages such as Java, C++, and C#, object composition and decomposition occur very frequently, because objects are created at runtime based on configurable parameters or user input. The most popular examples of object composition are class factories, which use configuration to create different kinds of objects at runtime. A more complex example is the generation of mock objects through runtime object creation. Mock objects are frequently used in testing frameworks, but real objects are also created at runtime depending on the requirements of the application. In this paper we use a tree traversal algorithm together with runtime object creation techniques to visualize such objects conveniently.
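The two ingredients the abstract combines, runtime object creation (class-factory style) and a tree traversal for visualization, can be sketched briefly. This is an illustrative example, not the paper's implementation; the class names and fields are invented:

```python
# Illustrative sketch (not the paper's implementation): create classes and
# objects at runtime from a configuration, then visualize the resulting
# attribute tree by a depth-first traversal.

def make_class(name, fields):
    """Create a class at runtime whose instances carry the given fields."""
    def __init__(self, **kwargs):
        for field, default in fields.items():
            setattr(self, field, kwargs.get(field, default))
    return type(name, (object,), {"__init__": __init__})

def render_tree(obj, indent=0):
    """Depth-first traversal of an object's attributes as indented lines."""
    lines = []
    for attr, value in sorted(vars(obj).items()):
        if hasattr(value, "__dict__"):          # nested runtime object
            lines.append("  " * indent + attr + ":")
            lines.extend(render_tree(value, indent + 1))
        else:
            lines.append("  " * indent + f"{attr} = {value!r}")
    return lines

# Class-factory style usage: classes and instances composed at runtime.
Engine = make_class("Engine", {"power_kw": 0})
Car = make_class("Car", {"model": "", "engine": None})
car = Car(model="roadster", engine=Engine(power_kw=150))
print("\n".join(render_tree(car)))
```

The same traversal works for any object graph built at runtime, which is what makes the approach generic.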
Citations: 0
Retrieval of images using data mining techniques
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019795
C. Joseph, Aswathy Wilson
Data mining is an emerging research area because of the large volumes of data now being generated. Image mining is a new branch of data mining that deals with the analysis of image data. Several methods exist for retrieving images from a large dataset, but each has drawbacks. This paper applies image mining techniques such as clustering and association rule mining to extract information from images, and it fuses multimodal features, both visual and textual. The system achieves better precision and recall values.
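The clustering half of this idea can be shown in miniature. The sketch below (not the authors' system; features, values, and the query are invented, and the association-rule half is omitted) groups images by a simple color feature with k-means-style clustering and answers a query by locating the nearest cluster:

```python
# A minimal sketch: cluster images by a 2-D colour feature, then answer a
# query image by assigning it to the nearest cluster centroid.

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def mean(pts):
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def kmeans(points, centroids, iterations=10):
    """Plain k-means on 2-D feature points; returns final centroids."""
    for _ in range(iterations):
        groups = {i: [] for i in range(len(centroids))}
        for p in points:
            i = min(range(len(centroids)),
                    key=lambda i: dist2(p, centroids[i]))
            groups[i].append(p)
        centroids = [mean(groups[i]) if groups[i] else centroids[i]
                     for i in groups]
    return centroids

# Hypothetical image features: (mean red, mean blue) per image.
features = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
cents = kmeans(features, [(1.0, 0.0), (0.0, 1.0)])
query = (0.85, 0.15)                       # a "reddish" query image
nearest = min(range(2), key=lambda i: dist2(query, cents[i]))
print("query falls in cluster", nearest)
```

In a real system the retrieval step would return the images stored in the matched cluster, ranked by distance to the query.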
Citations: 9
An investigation of combining gradient descriptor and diverse classifiers to improve object taxonomy in very large image dataset
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019774
T.R Anusha, N. Hemavathi, K. Mahantesh, R. Chetana
Assigning to an image a label corresponding to its category is defined as object taxonomy. In this paper, we propose a transform-based descriptor that effectively extracts intensity gradients defining edge directions from segmented regions. Feature vectors comprising color, shape, and texture information are obtained in a compressed, de-correlated space. First, Fuzzy c-means clustering is applied to an image in a complex hybrid color space to obtain clusters based on the color homogeneity of pixels. HOG is then employed on these clusters to extract discriminative features that detect local object appearance, characterized by fine-scale gradients across different orientation bins. To increase numerical stability, the obtained features are mapped onto a lower-dimensional feature space using PCA. For the subsequent classification, diverse similarity measures and neural networks are used to obtain an average correctness rate, resulting in highly discriminative image classification. We demonstrate the proposed work on the Caltech-101 and Caltech-256 datasets and obtain leading classification rates in comparison with several benchmark techniques explored in the literature.
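The HOG stage the pipeline relies on can be illustrated in a few lines. This is a toy sketch of the underlying idea only, not the paper's descriptor: compute intensity gradients on a tiny grayscale patch and accumulate their orientations, weighted by magnitude, into a small histogram of oriented gradients.

```python
# Toy HOG sketch: gradient orientations on one patch, binned by angle.
import math

def hog_histogram(patch, bins=4):
    """Histogram of gradient orientations (unsigned, 0..180 degrees)."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[int(angle // (180.0 / bins)) % bins] += mag
    return hist

# A patch whose intensity increases left to right: a purely horizontal
# gradient, so all histogram mass lands in the first (0-45 degree) bin.
patch = [[c * 10 for c in range(4)] for _ in range(4)]
print(hog_histogram(patch))
```

Real HOG implementations additionally split the image into cells, normalize over blocks, and interpolate between bins; those refinements are omitted here.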
Citations: 3
Bottom-up Pittsburgh approach for discovery of classification rules
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019579
Priyanka Sharma, S. Ratnoo
This paper presents a bottom-up Pittsburgh approach for the discovery of classification rules. Population initialization uses entropy as the attribute-significance measure and contains variable-sized organizations, each holding a set of IF-THEN rules. Because a bottom-up approach is employed, traditional operators are neither feasible nor efficient to use; therefore, four evolutionary operators are devised to realize the evolutionary operations performed on organizations. The bottom-up Pittsburgh approach yields the best rule set with good accuracy. In the experiments, the effectiveness of the proposed algorithm is evaluated by comparing the bottom-up Pittsburgh approach, with and without entropy, against the top-down Michigan approach, with and without entropy, on 10 datasets from the UCI and KEEL repositories. All results show that the bottom-up Pittsburgh approach achieves higher predictive accuracy and is more consistent.
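The entropy-based attribute-significance measure used for population initialization can be sketched as plain information gain. This is a hedged illustration of the measure only (the evolutionary algorithm and its four operators are not reproduced; the toy dataset is invented):

```python
# Entropy as an attribute-significance measure: attributes whose values
# split the classes cleanly get high information gain.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, attr_index, labels):
    """Entropy reduction obtained by splitting rows on one attribute."""
    total = entropy(labels)
    remainder = 0.0
    for value in set(r[attr_index] for r in rows):
        subset = [lab for r, lab in zip(rows, labels) if r[attr_index] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return total - remainder

# Toy dataset: attribute 0 predicts the class perfectly, attribute 1 not at all.
rows = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "y")]
labels = ["pos", "pos", "neg", "neg"]
print(information_gain(rows, 0, labels), information_gain(rows, 1, labels))
```

Seeding the initial rule population with high-gain attributes biases the search toward informative conditions from the start.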
Citations: 3
Performance measurements: Proxy server for various operating systems
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019652
S. Shiwani, Sandeep Kumar, Vishal Chandra, Sunny Bansal
The widespread abuse of proxies began years ago with a program known as Wingate. Before Windows had Internet connection sharing built in, people on a home network needed a way to route all of their machines' Internet traffic through a single dialup connection. Wingate served this purpose, but regrettably it shipped with an insecure default configuration: essentially anybody could connect to a Wingate server and telnet back out to another machine on another port. The company that wrote the software ultimately closed the hole, but the insecure versions were widely deployed and rarely upgraded. Turning to the present day, we see a further growth in proxy use: Web traffic has grown at an extraordinary rate over the past 7 years. Corporations and ISPs often turn to caching proxy servers to reduce the heavy load on their networks. To satisfy the demands of their content-hungry users, these proxy servers are frequently configured to proxy any port, with little attention to security. We deployed the proxy server on Linux and Windows, both on standard servers and on servers in the cloud.
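The core of the caching proxy servers mentioned above is a bounded response cache keyed by URL. The sketch below is illustrative only (the paper measures real proxy servers; the class, URLs, and eviction policy here are our own choices): an LRU cache in front of a stand-in origin fetch.

```python
# Illustrative caching-proxy core: a bounded LRU cache of responses.
# Network and OS specifics are out of scope for this sketch.
from collections import OrderedDict

class ProxyCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()          # url -> cached response body
        self.hits = self.misses = 0

    def fetch(self, url, origin_fetch):
        """Return a cached body, or fetch from the origin and cache it."""
        if url in self.store:
            self.hits += 1
            self.store.move_to_end(url)     # mark as most recently used
            return self.store[url]
        self.misses += 1
        body = origin_fetch(url)
        self.store[url] = body
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return body

cache = ProxyCache(capacity=2)
origin = lambda url: f"<page {url}>"        # stand-in for a real fetch
for url in ["/a", "/b", "/a", "/c", "/b"]:
    cache.fetch(url, origin)
print(cache.hits, cache.misses)
```

The hit/miss ratio of exactly this kind of cache is one of the quantities a proxy-server performance measurement would track.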
Citations: 2
Experiments on information retrieval mechanisms for distributed biodiversity databases environment
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019650
Manavalan, S. Chattopadhyay, Mangala, Prahlada Rao B.B., Sarat Chandra Babu, Akhil Kulkarni
This paper presents the details of a prototype biodiversity information retrieval system set up on the distributed Grid-Cloud resources of the GARUDA Grid project, India. The overall experiment was carried out with the help of open-source biodiversity databases. The structure of the relational database tables is not standardized, and the tables are hosted on a variety of Database Management Systems (DBMS) on different Virtual Machines (VMs), which are in general assumed to be geographically distributed. The front end of the end-user system is an HTML interface that captures the user query and redirects it to an application engine, built in Python and running on the master grid node. From the received input, the Python program interprets the data and generates a query in the Structured Query Language (SQL). The generated query is sent to the distributed remote database servers, which channel it to the local DBMS of each cloud virtual machine and execute the SQL query. The end results are retrieved by the master grid application engine and displayed in a new HTML page.
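The query-generation step, form fields in, SQL out, can be sketched briefly. The schema, table, and column names below are hypothetical (the paper does not specify them), and the GARUDA grid and remote DBMS layer are replaced by an in-memory SQLite stand-in; user-supplied values go through placeholders, while field names are assumed to come from a trusted form definition.

```python
# Sketch of the engine's query-generation step: captured form fields
# become a parameterized SELECT, executed here against local SQLite.
import sqlite3

def build_query(filters):
    """Build a parameterized SELECT from user-supplied field/value pairs."""
    clauses = " AND ".join(f"{field} = ?" for field in filters)
    sql = "SELECT name FROM species"
    if clauses:
        sql += " WHERE " + clauses
    return sql, list(filters.values())

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE species (name TEXT, family TEXT, region TEXT)")
db.executemany("INSERT INTO species VALUES (?, ?, ?)",
               [("hornbill", "Bucerotidae", "Assam"),
                ("tiger", "Felidae", "Sundarbans")])

sql, params = build_query({"region": "Assam"})
print(db.execute(sql, params).fetchall())
```

In the distributed setting, the same generated query would be dispatched to each remote DBMS and the partial results merged at the master node.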
Citations: 2
Supervised named entity recognition in Assamese language
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019728
G. Talukdar, Pranjal Protim Borah, Arup Baruah
Nouns play a very important role in every natural language. A subcategory of nouns is the proper noun, which represents the names of persons, locations, organizations, and so on. The task of recognizing proper nouns in a text and categorizing them into classes such as person, location, organization, and other is called Named Entity Recognition (NER). It is an essential step in many natural language processing applications and makes information extraction easier. NER in most Indian languages has been performed using rule-based, supervised, and unsupervised approaches. In this work our target language is Assamese, the language spoken by most people in the north-eastern part of India, particularly in Assam. For Assamese, NER has previously been performed using rule-based and suffix-stripping approaches. Supervised learning techniques are more useful and can be adapted to new domains more easily than rule-based approaches. This paper reports the first work on Assamese NER using a machine learning technique: a Naïve Bayes classifier. Since feature extraction plays the most important role in improving the performance of any machine learning technique, our aim in this work is to describe a few important features related to Assamese NER and to measure the system's performance using these features.
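The classification step can be sketched with a toy Naïve Bayes model. The features and labels below are invented for illustration (the paper's actual Assamese features and corpus are not reproduced); Laplace smoothing keeps unseen features from zeroing out a class.

```python
# Toy Naïve Bayes for token classification: multinomial model over
# binary word features, with add-one (Laplace) smoothing.
from collections import Counter, defaultdict
import math

class NaiveBayes:
    def fit(self, samples, labels):
        self.classes = Counter(labels)
        self.counts = defaultdict(Counter)      # class -> feature counts
        self.vocab = set()
        for feats, lab in zip(samples, labels):
            self.counts[lab].update(feats)
            self.vocab.update(feats)
        return self

    def predict(self, feats):
        def log_posterior(c):
            total = sum(self.counts[c].values())
            return math.log(self.classes[c]) + sum(
                math.log((self.counts[c][f] + 1) / (total + len(self.vocab)))
                for f in feats)
        return max(self.classes, key=log_posterior)

# Hypothetical token features: capitalization, context word, suffix.
train = [["capitalized", "follows:in"], ["capitalized", "suffix:pur"],
         ["lowercase", "digit"], ["lowercase", "follows:the"]]
labels = ["LOCATION", "LOCATION", "OTHER", "OTHER"]
nb = NaiveBayes().fit(train, labels)
print(nb.predict(["capitalized", "suffix:pur"]))
```

Exactly this kind of feature design, which cues actually separate entity classes, is what the paper's feature-description work is about.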
Citations: 6
Intrusion detection model using fusion of PCA and optimized SVM
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019692
I. Thaseen, C. Kumar
Intrusion detection systems (IDS) play a major role in detecting attacks on computers and networks. Anomaly-based intrusion detection models detect new attacks by observing deviations from a profile. However, traditional IDS suffer from problems such as high false-alarm rates, low detection capability against new network attacks, and insufficient analysis capacity. Using machine learning for intrusion models improves performance with experience. This paper proposes a novel method that integrates principal component analysis (PCA) and a support vector machine (SVM), optimizing the kernel parameters with an automatic parameter selection technique. The technique reduces the training and testing time needed to identify intrusions, thereby improving accuracy. The proposed method was tested on the KDD dataset. The data were carefully divided into training and testing sets, with minority attacks such as U2R and R2L placed in the testing set to assess the detection of unknown attacks. The results indicate that the proposed method successfully identifies intrusions, and the experiments show that its classification accuracy outperforms other classification techniques that use SVM as the classifier with other dimensionality reduction or feature selection methods. Minimal resources are consumed because the classifier input requires a reduced feature set, minimizing training and testing overhead.
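The PCA stage of such a pipeline can be sketched on its own (the SVM and the kernel-parameter optimization are not reproduced here, and the data are invented): find the leading principal component of centred data by power iteration on the covariance matrix, then project the samples onto it to reduce dimensionality.

```python
# PCA sketch: leading eigenvector of the sample covariance matrix via
# power iteration, then projection of the centred data onto it.
import math

def leading_component(data, iterations=100):
    n = len(data)
    means = [sum(col) / n for col in zip(*data)]
    centred = [[x - m for x, m in zip(row, means)] for row in data]
    d = len(means)
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d                              # power-iteration start vector
    for _ in range(iterations):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v, centred

# Strongly correlated features: the first component is close to (1,1)/sqrt(2),
# so one projected coordinate retains almost all of the variance.
data = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.8]]
v, centred = leading_component(data)
projected = [sum(x * vi for x, vi in zip(row, v)) for row in centred]
print([round(p, 2) for p in projected])
```

Feeding the classifier these projections instead of the raw features is what shrinks the training and testing time in a PCA+SVM design.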
Citations: 65
Design of a secure architecture for context-aware Web Services using access control mechanism
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019678
P. Charles, S. B. R. Kumar
Services are expected to be a promising way for people to use information and computing resources in the emerging ubiquitous network society and in cloud computing environments. Context-aware computing monitors the environment by means of sensors to provide relevant information or services according to the identified context. This paper explores recent findings in the implementation of Web Services in context-aware settings. The security issues that may surface are identified, and methods of countering those security threats are proposed. The main emphasis is the challenge of designing effective privacy and access control models for a context-aware Web Services environment. Hence there arises a need to design a security system for context-aware web services that supports end-to-end security in business services between service providers and service requesters. In view of this, a design for a secure architecture for context-aware web services is proposed.
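The essence of context-aware access control can be shown in a few lines. The roles, contexts, and policy rules below are invented for illustration and are not the paper's design: a request is granted only when the subject's role permits the action and the sensed context satisfies the rule's constraints.

```python
# Sketch of a context-aware access-control check: role + action must
# match a policy rule AND the ambient context must satisfy the rule.

POLICY = [  # (role, action, required context)
    ("doctor", "read_record", {"location": "hospital", "shift": "on"}),
    ("nurse", "read_record", {"location": "hospital", "shift": "on"}),
    ("doctor", "write_record", {"location": "hospital", "shift": "on"}),
]

def is_allowed(role, action, context):
    """Grant iff some rule matches role and action and every one of its
    context constraints is satisfied by the sensed context."""
    for p_role, p_action, required in POLICY:
        if p_role == role and p_action == action:
            if all(context.get(k) == v for k, v in required.items()):
                return True
    return False

ctx_ward = {"location": "hospital", "shift": "on"}
ctx_home = {"location": "home", "shift": "off"}
print(is_allowed("doctor", "write_record", ctx_ward))
print(is_allowed("doctor", "write_record", ctx_home))
```

In a Web Services deployment, such a check would sit in the service gateway, with the context supplied by the sensing layer rather than passed in directly.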
Citations: 4
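The entry above describes an access-control mechanism for context-aware web services, where the decision depends on the identified context rather than on identity alone. A minimal sketch of such a policy decision point is shown below; the context attributes (`role`, `location`, `hour`) and the sample rule are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# Illustrative context attributes attached to a service request.
# The attribute names below are assumptions for this sketch.
@dataclass(frozen=True)
class Context:
    role: str
    location: str
    hour: int  # 0-23, local time of the request

# A rule grants one action on one service when its context predicate holds.
@dataclass(frozen=True)
class Rule:
    service: str
    action: str
    condition: Callable[[Context], bool]

class ContextAwarePDP:
    """Minimal policy decision point: deny unless some rule matches."""

    def __init__(self, rules: Iterable[Rule]):
        self.rules = list(rules)

    def decide(self, service: str, action: str, ctx: Context) -> bool:
        # Default-deny: access is granted only if at least one rule
        # names this service/action pair and its context condition holds.
        return any(
            r.service == service and r.action == action and r.condition(ctx)
            for r in self.rules
        )

rules = [
    # Hypothetical policy: clinicians may read patient records,
    # but only on-site and during working hours.
    Rule("patient-records", "read",
         lambda c: c.role == "clinician"
         and c.location == "hospital" and 8 <= c.hour < 20),
]
pdp = ContextAwarePDP(rules)
print(pdp.decide("patient-records", "read",
                 Context("clinician", "hospital", 10)))  # True
print(pdp.decide("patient-records", "read",
                 Context("clinician", "home", 10)))      # False
```

The default-deny structure mirrors the end-to-end security goal in the abstract: a request that does not match an explicit, context-valid rule is rejected.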
Testability of object-oriented systems: An AHP-based approach for prioritization of metrics 面向对象系统的可测试性:用于度量优先级的基于ahp的方法
Pub Date : 2014-11-01 DOI: 10.1109/IC3I.2014.7019595
Priyanksha Khanna
This paper investigates testability from the perspective of the metrics used in object-oriented systems. The idea is to give an overview of object-oriented design metrics and to prioritize them, keeping testability as the overall goal. We have used the Analytic Hierarchy Process (AHP) to determine which metric is most widely used and best suited for testability.
本文从面向对象系统中使用的度量的角度来研究可测试性。这个想法是给出一个面向对象的设计指标的概述,同时优先考虑保持可测试性作为总体目标。我们使用层次分析法(AHP)来确定最常用和最适合测试性的度量。
{"title":"Testability of object-oriented systems: An AHP-based approach for prioritization of metrics","authors":"Priyanksha Khanna","doi":"10.1109/IC3I.2014.7019595","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019595","url":null,"abstract":"This paper investigates testability from the perspective of metrics used in an object-oriented system. The idea is to give an overview of object oriented design metrics with the prioritization of same keeping testability as the overall goal. We have used Analytic Hierarchy Process (AHP) method to attain which metric is mostly used and is best for testability.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"258 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115689280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
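The AHP procedure named in the entry above ranks alternatives from a pairwise-comparison matrix: priority weights are the normalized principal eigenvector, and a consistency ratio checks whether the judgments are acceptably coherent. The sketch below assumes four standard CK metrics (WMC, DIT, CBO, LCOM) and an invented judgment matrix; neither comes from the paper's actual survey data.

```python
import numpy as np

# Hypothetical pairwise-comparison matrix over four OO design metrics.
# A[i, j] encodes how much more important metric i is than metric j
# on Saaty's 1-9 scale; the values are illustrative only.
metrics = ["WMC", "DIT", "CBO", "LCOM"]
A = np.array([
    [1,   3,   2,   4],
    [1/3, 1,   1/2, 2],
    [1/2, 2,   1,   3],
    [1/4, 1/2, 1/3, 1],
], dtype=float)

# Priority weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = len(metrics)
lambda_max = eigvals[k].real
ci = (lambda_max - n) / (n - 1)
ri = 0.90  # Saaty's random index for n = 4
cr = ci / ri

ranking = sorted(zip(metrics, w), key=lambda p: -p[1])
print(ranking)           # metrics ordered by priority weight
print(f"CR = {cr:.3f}")  # judgments acceptable if CR < 0.10
```

With these example judgments, WMC receives the largest weight and the consistency ratio stays well below the conventional 0.10 threshold, so the ranking would be accepted.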