A generic approach for runtime object creation and visualization
Pub Date: 2014-11-01 · DOI: 10.1109/IC3I.2014.7019576
Sanath S. Shenoy, C. Vijeth
Applications today are commonly built using object-oriented programming, and a number of methods exist for retrieving data from objects. Technology has also advanced to the point where objects can be composed at runtime and used for further processing. In modern applications developed in object-oriented languages such as Java, C++ and C#, object composition and decomposition happen very frequently, because objects are created at runtime depending on configurable parameters or user input. The most popular examples of object composition are class factories, which use configuration to create different kinds of objects at runtime. A more complex example is the generation of mock objects through runtime object creation. Mock objects are frequently used in testing frameworks, but real objects are also created at runtime depending on the requirements of the application. In this paper we combine a tree traversal algorithm with a runtime object creation technique to visualize such objects conveniently.
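To make the abstract's core idea concrete, here is a minimal sketch (not the authors' implementation) that pairs reflective object creation from a configurable class path with a depth-first traversal that prints the composed object as a tree; the `create_object` and `visualize` names and the `argparse.Namespace` example are illustrative assumptions.

```python
# Hypothetical sketch: create objects at runtime from configuration and
# visualize their composition as a tree via depth-first traversal.
import importlib

def create_object(class_path, **kwargs):
    """Instantiate a class named in configuration, e.g. 'argparse.Namespace'."""
    module_name, class_name = class_path.rsplit(".", 1)
    cls = getattr(importlib.import_module(module_name), class_name)
    return cls(**kwargs)

def visualize(obj, name="root", depth=0, max_depth=3):
    """Recursively print an object's attributes as an indented tree."""
    print("  " * depth + f"{name}: {type(obj).__name__}")
    if depth >= max_depth or not hasattr(obj, "__dict__"):
        return
    for attr, value in vars(obj).items():
        visualize(value, attr, depth + 1, max_depth)

# Compose an object from a configurable class path, then walk it.
obj = create_object("argparse.Namespace", host="localhost", port=8080)
visualize(obj)
```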
{"title":"A generic approach for runtime object creation and visualization","authors":"Sanath S. Shenoy, C. Vijeth","doi":"10.1109/IC3I.2014.7019576","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019576","url":null,"abstract":"Currently Applications are being built using object oriented programming. A number of methods are used to retrieve data from objects. Technologies have also improved such that objects can be composed at runtime and used for further processing. In modern applications developed using object oriented languages such as Java, C++, C# etc, object composition and decomposition is done very frequently. This is because objects are created at runtime depending on configurable parameters or inputs from users. Most popular examples of object composition include class factories which use configuration to create different kinds of objects at runtime. A more complex example could include Generation of Mock objects using runtime object creation. The Mock objects are frequently used in Testing frameworks, however real objects are also created at runtime depending on the requirements of the application. In this paper we try to use a tree traversal algorithm and Runtime object creation technique to visualize them conveniently.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132260598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Retrieval of images using data mining techniques
Pub Date: 2014-11-01 · DOI: 10.1109/IC3I.2014.7019795
C. Joseph, Aswathy Wilson
Data mining is an emerging research area because of the generation of large volumes of data. Image mining is a new branch of data mining that deals with the analysis of image data. Several methods exist for retrieving images from a large dataset, but they have drawbacks. This paper applies image mining techniques such as clustering and association rule mining to mine data from images, and it fuses multimodal features, both visual and textual. The system produces better precision and recall values.
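As a rough illustration of the clustering-plus-fusion idea (the paper's exact pipeline is not given here), the following sketch concatenates placeholder visual and textual feature vectors, clusters the collection with k-means, and retrieves within the query's cluster; all array shapes and data are invented.

```python
# Illustrative sketch (not the authors' implementation): fuse visual and
# textual features, cluster the collection, and retrieve within clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
visual = rng.random((100, 64))   # e.g. 64-bin color histograms (placeholder)
textual = rng.random((100, 20))  # e.g. TF-IDF of image annotations (placeholder)
features = np.hstack([visual, textual])  # simple early fusion

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)

def retrieve(query_vec, k=3):
    """Search only the query's cluster, then rank by Euclidean distance."""
    cluster = kmeans.predict(query_vec[None, :])[0]
    members = np.where(kmeans.labels_ == cluster)[0]
    dists = np.linalg.norm(features[members] - query_vec, axis=1)
    return members[np.argsort(dists)[:k]]

print(retrieve(features[0]))  # the query image itself should rank first
```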
{"title":"Retrieval of images using data mining techniques","authors":"C. Joseph, Aswathy Wilson","doi":"10.1109/IC3I.2014.7019795","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019795","url":null,"abstract":"Data mining is an emerging research area, because of the generation of large volume of data. The image mining is new branch of data mining, which deals with the analysis of image data. There is several methods for retrieving images from a large dataset. But they have some drawbacks. In this paper using image mining techniques like clustering and associations rules mining for mine the data from image. And also it uses the fusion of multimodal features like visual and textual. This system produces a better precise and recalls values.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127820249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An investigation of combining gradient descriptor and diverse classifiers to improve object taxonomy in very large image dataset
Pub Date: 2014-11-01 · DOI: 10.1109/IC3I.2014.7019774
T.R Anusha, N. Hemavathi, K. Mahantesh, R. Chetana
Assigning to an image a label pertaining to its category is defined as object taxonomy. In this paper, we propose a transform-based descriptor that effectively extracts intensity gradients defining edge directions from segmented regions. Feature vectors comprising color, shape and texture information are obtained in a compressed and de-correlated space. First, fuzzy c-means clustering is applied to an image in a complex hybrid color space to obtain clusters based on the color homogeneity of pixels. Then, HOG is employed on these clusters to extract discriminative features that capture local object appearance, characterized by fine-scale gradients at different orientation bins. To increase numerical stability, the obtained features are projected onto a lower-dimensional feature space using PCA. For the subsequent classification, diverse similarity measures and neural networks are used to obtain an average correctness rate, resulting in highly discriminative image classification. We demonstrate the proposed work on the Caltech-101 and Caltech-256 datasets and obtain leading classification rates in comparison with several benchmark techniques from the literature.
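A minimal sketch of the descriptor pipeline, under stated assumptions: HOG features per image, PCA projection, and a simple nearest-neighbour classifier. The paper segments images with fuzzy c-means in a hybrid color space and also uses neural networks; those steps are omitted or replaced here, and the images and labels are random placeholders.

```python
# Sketch of the HOG -> PCA -> classifier pipeline; placeholder data only.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))      # placeholder grayscale images
labels = rng.integers(0, 4, size=40)   # placeholder category labels

# HOG: fine-scale gradients binned over orientations, as in the abstract.
feats = np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2)) for im in images])

reduced = PCA(n_components=20).fit_transform(feats)  # de-correlated subspace

clf = KNeighborsClassifier(n_neighbors=3).fit(reduced[:30], labels[:30])
print("accuracy:", clf.score(reduced[30:], labels[30:]))
```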
{"title":"An investigation of combining gradient descriptor and diverse classifiers to improve object taxonomy in very large image dataset","authors":"T.R Anusha, N. Hemavathi, K. Mahantesh, R. Chetana","doi":"10.1109/IC3I.2014.7019774","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019774","url":null,"abstract":"Assigning a label pertaining to an image belonging to its category is defined as object taxonomy. In this paper, we propose a transform based descriptor which effectively extracts intensity gradients defining edge directions from segmented regions. Feature vectors comprising color, shape and texture information are obtained in compressed and de-correlated space. Firstly, Fuzzy c-means clustering is applied to an image in complex hybrid color space to obtain clusters based on color homogeneity of pixels. Further, HOG is employed on these clusters to extract discriminative features detecting local object appearance which is characterized with fine scale gradients at different orientation bins. To increase numerical stability, the obtained features are mapped onto local dimension feature space using PCA. For subsequent classification, diverse similarity measures and Neural networks are used to obtain an average correctness rate resulting in highly discriminative image classification. We demonstrated our proposed work on Caltech-101 and Caltech-256 datasets and obtained leading classification rates in comparison with several benchmarking techniques explored in literature.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126350398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bottom-up Pittsburgh approach for discovery of classification rules
Pub Date: 2014-11-01 · DOI: 10.1109/IC3I.2014.7019579
Priyanka Sharma, S. Ratnoo
This paper presents a bottom-up Pittsburgh approach for the discovery of classification rules. Population initialization uses entropy as the attribute-significance measure and contains variable-sized organizations, each of which is a set of IF-THEN rules. Because a bottom-up approach is employed, traditional operators are neither feasible nor efficient, so four evolutionary operators are devised to realize the evolutionary operations performed on organizations. The bottom-up Pittsburgh approach yields the best rule set with good accuracy. In experiments, the effectiveness of the proposed algorithm is evaluated by comparing the results of the bottom-up Pittsburgh approach, with and without entropy, against the top-down Michigan approach, with and without entropy, on 10 datasets from the UCI and KEEL repositories. All results show that the bottom-up Pittsburgh approach achieves higher predictive accuracy and is more consistent.
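Since the paper's four organization-level operators are not reproduced here, the toy sketch below only illustrates the Pittsburgh idea itself: each individual is a whole rule set (organization) evaluated by training accuracy, evolved with a generic mutate-and-select loop on an invented dataset.

```python
# Toy Pittsburgh-style evolution: individuals are complete rule sets.
import random

random.seed(0)
DATA = [([0, 1], 0), ([1, 1], 1), ([0, 0], 0), ([1, 0], 1)] * 5  # toy dataset

def predict(ruleset, x, default=0):
    for attr, val, label in ruleset:   # IF x[attr] == val THEN label
        if x[attr] == val:
            return label
    return default

def fitness(ruleset):
    return sum(predict(ruleset, x) == y for x, y in DATA) / len(DATA)

def random_rule():
    return (random.randint(0, 1), random.randint(0, 1), random.randint(0, 1))

def mutate(ruleset):
    rs = list(ruleset)
    if rs and random.random() < 0.5:
        rs[random.randrange(len(rs))] = random_rule()  # replace a rule
    else:
        rs.append(random_rule())                       # grow the organization
    return rs

population = [[random_rule()] for _ in range(20)]
for gen in range(50):
    population.sort(key=fitness, reverse=True)
    population = population[:10] + [mutate(random.choice(population[:10]))
                                    for _ in range(10)]
best = max(population, key=fitness)
print("best accuracy:", fitness(best), "rules:", best)
```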
{"title":"Bottom-up Pittsburgh approach for discovery of classification rules","authors":"Priyanka Sharma, S. Ratnoo","doi":"10.1109/IC3I.2014.7019579","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019579","url":null,"abstract":"This paper presents bottom-up Pittsburgh approach for discovery of classification rules. Population initialization makes use of entropy as the attribute significance measure and contains variable sized organizations. Each organization contains a set of IF-THEN rules. As bottom-up approach is employed, so traditional operators are not feasible and efficient to use. Therefore, four evolutionary operators are devised for realizing the evolutionary operations performed on organizations. Bottom-up Pittsburgh approach gives best set of rule having good accuracy. In experiments, the effectiveness of the proposed algorithm is evaluated by comparing the results of bottom-up Pittsburgh with and without entropy to the top-down Michigan approach with and without entropy on 10 datasets from the UCI and KEEL repository. All results show that bottom-up Pittsburgh approach achieves a higher predictive accuracy and is more consistent.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"29 17","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121000014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance measurements: Proxy server for various operating systems
Pub Date: 2014-11-01 · DOI: 10.1109/IC3I.2014.7019652
S. Shiwani, Sandeep Kumar, Vishal Chandra, Sunny Bansal
The widespread abuse of proxies started years back with a program known as WinGate. Before Windows had Internet connection sharing built in, people with a home network needed a way to route all of their machines' Internet traffic through a single dialup connection. WinGate served this purpose, but unfortunately it shipped with an insecure default configuration: essentially anybody could connect to your WinGate server and telnet back out to another machine on another port. The company that wrote the software eventually closed the hole, but the original versions were widely deployed and rarely upgraded. Turning to the present day, we observe a second development in proxy usage: Web traffic has grown at an extraordinary rate over the past 7 years, and corporations and ISPs often turn to caching proxy servers to reduce the tremendous load on their networks. To satisfy the demands of their content-hungry users, these proxy servers are frequently configured to proxy any port, with little regard for security. We deployed the proxy server under Linux and Windows, on standard servers and on servers in the cloud.
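For context, a caching proxy of the kind benchmarked in such studies can be sketched in a few lines. This GET-only, cache-forever toy (standard library only, invented port) is nowhere near a production proxy such as Squid, and it deliberately ignores cache validation and security.

```python
# Minimal caching HTTP forward proxy (GET only) - an illustrative sketch.
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

CACHE = {}  # url -> response body (never invalidated in this toy)

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # When a client uses this as a forward proxy, self.path is the
        # absolute URL it requested.
        if self.path not in CACHE:
            with urllib.request.urlopen(self.path) as upstream:
                CACHE[self.path] = upstream.read()
        body = CACHE[self.path]
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8888), ProxyHandler).serve_forever()
```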
{"title":"Performance measurements: Proxy server for various operating systems","authors":"S. Shiwani, Sandeep Kumar, Vishal Chandra, Sunny Bansal","doi":"10.1109/IC3I.2014.7019652","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019652","url":null,"abstract":"The extensive cruelty of proxies started years back with a plan known as Wingate. Earlier than when Windows had Internet connection sharing built in, populace through a home network required a technique to route all their machinery Internet traffic throughout a sole dialup. Wingate provided this reason, but regrettably it came with an insecure evasion configuration. Fundamentally anybody could join to your Wingate server plus telnet back out to an additional machine on an additional port. The corporation that wrote the software ultimately blocked the hole, but the innovative versions were extensively organized and uncommonly upgraded. Spiraling to the current day, we notice a subsequent development in proxy exercise Web traffic has developed at an extraordinary speed above the past 7 years. Corporations and ISPs often go round to caching proxy servers to decrease the wonderful load on their networks. In categorize to gratify the anxiety of their content-hungry users, these proxy servers are frequently configured to proxy any port, through small observe to security. We applied the proxy server in Linux and Windows in standard servers and Servers in Cloud.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124976450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experiments on information retrieval mechanisms for distributed biodiversity databases environment
Pub Date: 2014-11-01 · DOI: 10.1109/IC3I.2014.7019650
Manavalan, S. Chattopadhyay, Mangala, Prahlada Rao B.B., Sarat Chandra Babu, Akhil Kulkarni
This paper describes a prototype biodiversity information retrieval system set up using the distributed Grid-Cloud resources of the GARUDA Grid project, India. The experiment was carried out with the help of open-source biodiversity databases. The structure of these relational database tables is not standardized, and the tables are hosted on a variety of Database Management Systems (DBMS) on different Virtual Machines (VMs), which are in general assumed to be geographically distributed. The front end is an HTML interface that captures the user query and redirects it to the application engine, which is written in Python and runs on the master grid node. From the received input, the Python program interprets the data and generates a query in the Structured Query Language (SQL). The generated query is sent to the distributed remote database servers, which pass it on to the local DBMS of each cloud virtual machine for execution. The results are retrieved by the master grid application engine and displayed in a new HTML page.
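A minimal sketch of the described query flow, with sqlite3 files standing in for the geographically distributed DBMS instances and an invented `species` schema: the engine builds one SQL query and fans it out to every node, merging the results for the HTML page.

```python
# Federated query sketch: one generated SQL statement, many database nodes.
import sqlite3

NODES = ["node_a.db", "node_b.db"]  # hypothetical per-VM databases

def setup(path, rows):
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE IF NOT EXISTS species (name TEXT, region TEXT)")
    con.executemany("INSERT INTO species VALUES (?, ?)", rows)
    con.commit()
    con.close()

setup("node_a.db", [("Panthera tigris", "Sundarbans")])
setup("node_b.db", [("Elephas maximus", "Western Ghats")])

def federated_query(name_pattern):
    sql = "SELECT name, region FROM species WHERE name LIKE ?"
    results = []
    for node in NODES:                # engine queries each remote DBMS
        con = sqlite3.connect(node)
        results += con.execute(sql, (name_pattern,)).fetchall()
        con.close()
    return results                    # merged for the HTML result page

print(federated_query("%a%"))
```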
{"title":"Experiments on information retrieval mechanisms for distributed biodiversity databases environment","authors":"Manavalan, S. Chattopadhyay, Mangala, Prahlada Rao B.B., Sarat Chandra Babu, Akhil Kulkarni","doi":"10.1109/IC3I.2014.7019650","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019650","url":null,"abstract":"This paper mainly brings out the details related to a prototype biodiversity information retrieval system that has been setup using the distributed Grid-Cloud resources of GARUDA Grid project, India. The overall experiment has been done with the help of open source biodiversity databases. The structure of these relational database tables are not standardized and are hosted on a variety of Database Management Systems (DBMS) at different Virtual Machines (VMs) which in general been assumed as geographically distributed. The front end of the end user system is an HTML interface which captures and redirects the user query to the application engine which has been built using python and made functional in master grid node. According to the received input, the python program interprets the data and generates a query in the Structured Query Language (SQL). This generated query is sent to the distributed remote database servers which channelizes to the local DBMS of the cloud's virtual machine and executes the SQL query. The end results are retrieved back by the master grid application engine and been displayed in a new HTML page.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125216944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supervised named entity recognition in Assamese language
Pub Date: 2014-11-01 · DOI: 10.1109/IC3I.2014.7019728
G. Talukdar, Pranjal Protim Borah, Arup Baruah
In every natural language, nouns play a very important role. A subcategory of nouns is the proper noun, which represents names of persons, locations, organizations and so on. The task of recognizing the proper nouns in a text and categorizing them into classes such as person, location, organization and other is called Named Entity Recognition (NER). It is an essential step in many natural language processing applications and makes information extraction easier. NER in most Indian languages has been performed using rule-based, supervised and unsupervised approaches. Our target language in this work is Assamese, spoken widely in the north-eastern part of India and particularly in Assam. For Assamese, NER has previously been performed using rule-based and suffix-stripping approaches. Supervised learning techniques are more useful and can be adapted to new domains more easily than rule-based approaches. This paper reports the first work on Assamese NER using a machine learning technique: a Naïve Bayes classifier. Since feature extraction plays the most important role in the performance of any machine learning technique, our aim is to describe a few important features relevant to Assamese NER and to measure the system's performance using these features.
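To illustrate the feature-driven Naïve Bayes setup (the paper's actual feature set and corpus are not shown here), the sketch below invents a tiny transliterated toy corpus and a handful of plausible per-token features such as word suffixes and the preceding word.

```python
# Feature-based Naive Bayes NER sketch; tokens, labels and features are
# invented stand-ins, not the paper's feature set.
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB

def features(tokens, i):
    w = tokens[i]
    return {
        "word": w,
        "suffix2": w[-2:],                       # suffixes are strong NE cues
        "prev": tokens[i - 1] if i else "<S>",   # previous word as context
        "length": min(len(w), 10),
    }

# Tiny toy corpus: (sentence tokens, per-token labels).
corpus = [(["ram", "guwahati", "gol"], ["PER", "LOC", "O"]),
          (["sita", "dibrugarh", "ahil"], ["PER", "LOC", "O"])]

X = [features(toks, i) for toks, _ in corpus for i in range(len(toks))]
y = [lab for _, labs in corpus for lab in labs]

vec = DictVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(X), y)
print(clf.predict(vec.transform([features(["guwahati"], 0)])))
```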
{"title":"Supervised named entity recognition in Assamese language","authors":"G. Talukdar, Pranjal Protim Borah, Arup Baruah","doi":"10.1109/IC3I.2014.7019728","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019728","url":null,"abstract":"In each and every natural language nouns play a very important role. A subcategory of noun is proper noun. They represent the names of person, location, organization etc. The task of recognizing the proper nouns in a text and categorizing them into some classes such as person, location, organization and other is called Named Entity Recognition. This is a very essential step of many natural language processing applications that makes the process of information extraction easier. Named Entity Recognition (NER) in most of the Indian languages has been performed using rule-based, supervised and unsupervised approaches. In this work our target language is Assamese, the language spoken by most of the people in North-Eastern part of India and particularly in Assam. In Assamese language, Named Entity Recognition has been performed using the rule based and suffix stripping based approaches. Supervised learning technique is more useful and can be easily adapted to new domains compared to rule based approaches. This paper reports the first work in Assamese NER using a machine learning technique. In this paper Assamese Named Entity Recognition is performed using Naïve Bayes classifier. Since feature extraction plays the most important role in getting better performance in any machine learning technique, in this work our aim is to put forward a description of a few important features related to Assamese NER and performance measure of the system using these features.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121779530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intrusion detection model using fusion of PCA and optimized SVM
Pub Date: 2014-11-01 · DOI: 10.1109/IC3I.2014.7019692
I. Thaseen, C. Kumar
Intrusion detection systems (IDS) play a major role in detecting attacks on computers and networks. Anomaly-based intrusion detection models detect new attacks by observing deviations from a profile. However, traditional IDSs suffer from problems such as a high false alarm rate, low detection capability against new network attacks, and insufficient analysis capacity. Machine learning allows intrusion models to improve their performance automatically with experience. This paper proposes a novel method that integrates principal component analysis (PCA) and a support vector machine (SVM) whose kernel parameters are tuned by an automatic parameter selection technique. The technique reduces training and testing time while improving accuracy. The proposed method was tested on the KDD dataset. The data were carefully divided into training and testing sets, with minority attacks such as U2R and R2L placed in the testing set to assess the detection of unknown attacks. The results indicate that the proposed method successfully identifies intrusions, and the experiments show that its classification accuracy outperforms other techniques that use SVM as the classifier with other dimensionality reduction or feature selection methods. Minimal resources are consumed because the classifier operates on a reduced feature set, which minimizes training and testing overhead.
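The PCA-SVM combination with automatic parameter selection maps naturally onto a standard grid-search pipeline; the sketch below uses random placeholder data with 41 columns (the KDD record width) and an assumed parameter grid, not the paper's tuned values.

```python
# PCA + SVM with automatic RBF-kernel parameter selection via grid search.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.random((300, 41))             # 41 features, as in KDD records
y = rng.integers(0, 2, size=300)      # normal vs. attack (placeholder labels)

pipe = Pipeline([("scale", StandardScaler()),
                 ("pca", PCA(n_components=10)),   # reduced feature set
                 ("svm", SVC(kernel="rbf"))])

# Automatic parameter selection over the kernel parameters C and gamma.
grid = GridSearchCV(pipe, {"svm__C": [1, 10, 100],
                           "svm__gamma": [0.01, 0.1, 1.0]}, cv=3)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```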
{"title":"Intrusion detection model using fusion of PCA and optimized SVM","authors":"I. Thaseen, C. Kumar","doi":"10.1109/IC3I.2014.7019692","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019692","url":null,"abstract":"Intrusion detection systems (IDS) play a major role in detecting the attacks that occur in the computer or networks. Anomaly intrusion detection models detect new attacks by observing the deviation from profile. However there are many problems in the traditional IDS such as high false alarm rate, low detection capability against new network attacks and insufficient analysis capacity. The use of machine learning for intrusion models automatically increases the performance with an improved experience. This paper proposes a novel method of integrating principal component analysis (PCA) and support vector machine (SVM) by optimizing the kernel parameters using automatic parameter selection technique. This technique reduces the training and testing time to identify intrusions thereby improving the accuracy. The proposed method was tested on KDD data set. The datasets were carefully divided into training and testing considering the minority attacks such as U2R and R2L to be present in the testing set to identify the occurrence of unknown attack. The results indicate that the proposed method is successful in identifying intrusions. The experimental results show that the classification accuracy of the proposed method outperforms other classification techniques using SVM as the classifier and other dimensionality reduction or feature selection techniques. Minimum resources are consumed as the classifier input requires reduced feature set and thereby minimizing training and testing overhead time.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128084884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of a secure architecture for context-aware Web Services using access control mechanism
Pub Date: 2014-11-01 · DOI: 10.1109/IC3I.2014.7019678
P. Charles, S. B. R. Kumar
Services are expected to be a promising way for people to use information and computing resources in our emerging ubiquitous-network society and cloud computing environments. Context-aware computing monitors the environment by means of sensors in order to provide relevant information or services according to the identified context. This paper explores recent findings on implementing Web Services in context-aware settings. The security issues that may surface are identified, and methods of countering those threats are proposed. The main emphasis is the challenge of designing effective privacy and access control models for a context-aware Web Services environment. Hence, there is a need to design a security system for context-aware web services that supports end-to-end security in business interactions between service providers and service requesters. In view of this, a design for a secure architecture for context-aware web services is proposed.
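To make "access control driven by context" concrete, here is a toy policy check in which a request is permitted only when both the identity (role) and the context (location, time of day) match a policy; all policy attributes and services are invented for illustration.

```python
# Toy context-aware access control check; policies are illustrative only.
from datetime import time

POLICIES = [
    # (role, required location or None, allowed hours, permitted service)
    ("clinician", "hospital", (time(8), time(20)), "patient-records"),
    ("admin", None, (time(0), time(23, 59)), "audit-log"),
]

def authorize(role, location, now, service):
    """Permit iff some policy matches both the identity and the context."""
    for p_role, p_loc, (start, end), p_service in POLICIES:
        if (role == p_role and service == p_service
                and (p_loc is None or location == p_loc)
                and start <= now <= end):
            return True
    return False

print(authorize("clinician", "hospital", time(14, 30), "patient-records"))  # True
print(authorize("clinician", "home", time(14, 30), "patient-records"))      # False
```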
{"title":"Design of a secure architecture for context-aware Web Services using access control mechanism","authors":"P. Charles, S. B. R. Kumar","doi":"10.1109/IC3I.2014.7019678","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019678","url":null,"abstract":"Services are expected to be a promising way for people to use information and computing resources in our emerging ubiquitous network society and cloud computing environments. Context aware computing attains environment monitoring by means of sensors to provide relevant information or services according to the identified context. In this paper, which explores recent findings in the implementation of Web Services in context-aware areas. The security issues that may surface are identified and methods of countering those security threats are proposed. The main emphasis is the challenge to design effective privacy and access control models for Context-Aware Web Services environment. Hence, there is a need that arises to design a security system for context-aware web services with the support of end-to-end security in business services between the service providers and service requesters. In view of this, a design for secure architecture for context-aware web services is proposed.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133807852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Testability of object-oriented systems: An AHP-based approach for prioritization of metrics
Pub Date: 2014-11-01 · DOI: 10.1109/IC3I.2014.7019595
Priyanksha Khanna
This paper investigates testability from the perspective of the metrics used in an object-oriented system. The idea is to give an overview of object-oriented design metrics and to prioritize them with testability as the overall goal. We use the Analytic Hierarchy Process (AHP) to determine which metrics are most widely used and best suited to testability.
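AHP itself is easy to show as a worked example: given a pairwise comparison matrix over candidate metrics (the 3x3 judgments below are invented, e.g. for WMC, DIT and CBO), the priority weights are the normalized principal eigenvector, and the consistency ratio CR = CI/RI checks that the judgments are coherent.

```python
# Worked AHP example with an invented pairwise comparison matrix.
import numpy as np

A = np.array([[1,   3,   5],
              [1/3, 1,   3],
              [1/5, 1/3, 1]])          # pairwise "importance for testability"

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()               # priority vector over the metrics

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)   # consistency index: (lambda_max - n)/(n - 1)
cr = ci / 0.58                         # random index RI = 0.58 for n = 3
print("weights:", weights.round(3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable
```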
{"title":"Testability of object-oriented systems: An AHP-based approach for prioritization of metrics","authors":"Priyanksha Khanna","doi":"10.1109/IC3I.2014.7019595","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019595","url":null,"abstract":"This paper investigates testability from the perspective of metrics used in an object-oriented system. The idea is to give an overview of object oriented design metrics with the prioritization of same keeping testability as the overall goal. We have used Analytic Hierarchy Process (AHP) method to attain which metric is mostly used and is best for testability.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"258 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115689280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}