Novel data storage and retrieval in cloud database by using frequent access node encryption
Pub Date: 2014-11-01, DOI: 10.1109/IC3I.2014.7019733
Sadia Syed, P. Teja
Cloud computing offers companies virtually unlimited data storage at attractive cost, but it also introduces new challenges for protecting the confidentiality of data and controlling access to it. Sensitive data such as medical records and business or governmental records cannot be stored unencrypted on the cloud. Moreover, such data may be of interest to many users, and a different access policy may apply to each of them. Companies therefore need mechanisms to query encrypted data without revealing anything to the cloud server and to enforce access policies on the data. Current security schemes do not allow complex queries over encrypted data in a multi-user setting; they are limited to keyword searches, and they assume that all users have the same access rights. This paper presents the implementation of a scheme that allows SQL-like queries on encrypted databases in a multi-user setting while letting the database owner assign different access rights to individual users. We address these issues by combining cloud computing technologies with attribute-based encryption for secure storage and efficient retrieval of data from the database, where the attribute is the frequently accessed node in the database, which is encrypted for secure storage and retrieval. Database encryption is unavoidable in situations where access control alone is not sufficient: it adds a layer of protection on top of conventional access control and prevents unauthorized users, including intruders who break into the network, from viewing sensitive data, so the data remains protected even if the database is successfully attacked or stolen. However, encryption and decryption degrade database performance; when all information is stored in encrypted form, selections can no longer be evaluated directly on the database content and the data must first be decrypted, forcing an unwelcome trade-off between security and performance. We present a multi-level threshold attribute-based encryption scheme whose ciphertext size depends only on the size of the policy and is independent of the number of attributes, taking the most frequently accessed node in the database as the attribute.
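The paper's own construction is attribute-based; purely as an illustration of the underlying idea of matching values in a frequently accessed, encrypted column without decrypting anything on the server, the sketch below uses a keyed blind index (HMAC) alongside conventionally encrypted values. The column name, keys, and data are invented for the example, and this is not the multi-level threshold ABE scheme described above.

```python
# Illustrative sketch only: equality lookups over an encrypted column via a
# keyed blind index. This is NOT the attribute-based scheme from the paper.
import hashlib
import hmac
from cryptography.fernet import Fernet  # pip install cryptography

index_key = b"held-by-the-data-owner-only"      # never sent to the cloud
fernet = Fernet(Fernet.generate_key())

def blind_index(value: str) -> str:
    """Deterministic keyed tag; equal plaintexts yield equal tags."""
    return hmac.new(index_key, value.encode(), hashlib.sha256).hexdigest()

# "Upload": the cloud stores only ciphertexts plus blind-index tags.
cloud_table = []
for record_id, diagnosis in [(1, "diabetes"), (2, "asthma"), (3, "diabetes")]:
    cloud_table.append({
        "id": record_id,
        "diagnosis_ct": fernet.encrypt(diagnosis.encode()),
        "diagnosis_ix": blind_index(diagnosis),   # the frequently accessed column
    })

# "Query": the client sends only the tag; the server matches without decrypting.
token = blind_index("diabetes")
matches = [row for row in cloud_table if row["diagnosis_ix"] == token]
print([fernet.decrypt(row["diagnosis_ct"]).decode() for row in matches])
```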
{"title":"Novel data storage and retrieval in cloud database by using frequent access node encryption","authors":"Sadia Syed, P. Teja","doi":"10.1109/IC3I.2014.7019733","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019733","url":null,"abstract":"Cloud computing has the advantage that it offers companies unlimited data storage at attractive costs. However, it also introduces new challenges for protecting the confidentiality of the data, and the access to the data. Sensitive data like medical records, business or governmental data cannot be stored unencrypted on the cloud. Moreover, they can be of interest to many users and different policies could apply to each. Companies need new mechanisms to query the encrypted data without revealing anything to the cloud server, and to enforce access policies to the data. Current security schemes do not allow complex encrypted queries over encrypted data in a multi-user setting. Instead, they are limited to keyword searches. Moreover, current solutions assume that all users have the same access rights to the data. This paper shows the implementation of a scheme that allows making SQL-like queries on encrypted databases in a multi-user setting, while at the same time allowing the database owner to assign different access rights to users.we address these issues by combining cloud computing technologies and Attribute Based Encryption for Secure storage and efficient retrieval of Data from the Databases. Here the Attribute is the Frequent access Node in the database which can be Encrypted for Secure Storage and Retrieval. Using database encryption to protect data in some situations where access control is not solely enough is inevitable. Database encryption provides an additional layer of protection to conventional access control techniques. It prevents unauthorized users, including intruders breaking into a network, from viewing the sensitive data. As a result data keeps protected even in the incident that database is successfully attacked or stolen. However, data encryption and decryption process result in database performance degradation. In the situation where all the information is stored in encrypted form, one cannot make the selection on the database content any more. Data should be decrypted first, so an unwilling tradeoff between the security and the performance is normally forced. We present our approach for a multi-level threshold attribute based encryption scheme whose cipher text size depends only on the size of the policy and is independent of the number of attributes. The attribute can be taken as the Very frequent Accessing Node in the Database.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131499372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis and performance evaluation of various image segmentation methods
Pub Date: 2014-11-01, DOI: 10.1109/IC3I.2014.7019614
S. U. Mageswari, C. Mala
Image segmentation is a primary stage in image processing for identifying objects of interest. Segmentation methods can be classified into region-based, transform-based, edge-based and clustering-based approaches. In this paper, segmentation methods including histogram-based segmentation, watershed, the Canny edge detector and K-means clustering are studied and analyzed. The experimental results are compared using different evaluation measures, including three standard image segmentation indices: the Rand index, global consistency error and variation of information.
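As a concrete example of one of these evaluation measures, the Rand index between a segmentation and a reference labelling can be computed from the label contingency table. The sketch below is a plain NumPy illustration with invented toy label maps, not code from the paper.

```python
import numpy as np

def rand_index(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Fraction of pixel pairs on which two labelings agree: both place the
    pair in the same region, or both place it in different regions."""
    a, b = seg_a.ravel(), seg_b.ravel()
    n = a.size
    # Contingency table: count of pixels with label i in A and label j in B.
    contingency = np.zeros((a.max() + 1, b.max() + 1), dtype=np.int64)
    np.add.at(contingency, (a, b), 1)

    def pairs(x):                       # number of unordered pairs, C(x, 2)
        return x * (x - 1) // 2

    total_pairs = pairs(n)
    same_in_both = pairs(contingency).sum()
    same_in_a = pairs(contingency.sum(axis=1)).sum()
    same_in_b = pairs(contingency.sum(axis=0)).sum()
    agreements = total_pairs + 2 * same_in_both - same_in_a - same_in_b
    return agreements / total_pairs

# Toy example: two 2x2 label maps that agree on half of the pixel pairs.
print(rand_index(np.array([[0, 0], [1, 1]]), np.array([[0, 0], [1, 0]])))  # 0.5
```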
{"title":"Analysis and performance evaluation of various image segmentation methods","authors":"S. U. Mageswari, C. Mala","doi":"10.1109/IC3I.2014.7019614","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019614","url":null,"abstract":"Image segmentation is a primary stage in image processing for identifying objects of interest. Segmentation methods are classified into region based, transform based, edge based and clustering based segmentation. In this paper, segmentation methods including histogram, watershed, Canny edge detector and K-means clustering techniques are studied and analyzed. The experimental results obtained are compared with different evaluation measures including three standard image segmentation indices: rand index, globally consistency error and variation of information.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131315494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Template matching method for Kannada Handwritten recognition based on correlation analysis
Pub Date: 2014-11-01, DOI: 10.1109/IC3I.2014.7019635
C. Aravinda, H. Prakash
Handwriting recognition systems have been developed out of the need to automate the conversion of handwritten data into electronic format, a process that would otherwise be lengthy and error prone. Building a character recognition system has been a major area of research for over a decade because of its wide range of applications, and many researchers have proposed techniques for recognizing handwritten characters in different languages. In this paper we adopt a correlation technique for the recognition of handwritten Kannada characters. The combination of Kannada characters into compound forms, known as Kagunita, makes their recognition more complex. The digitized input image is subjected to various preprocessing steps, and the processed image is then segmented into individual characters using a simple segmentation algorithm. Each segmented character is correlated with the stored templates, and the template with the maximum correlation value is output in editable format.
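A minimal sketch of the correlation step, under the assumption that preprocessing, segmentation, and a set of size-normalized Kannada templates already exist elsewhere: compute the normalized correlation coefficient between the segmented character and every stored template and return the best match. The labels and synthetic images are placeholders.

```python
import numpy as np

def corr2(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized 2-D correlation coefficient between two same-size images."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def recognize(character: np.ndarray, templates: dict) -> str:
    """Return the label of the stored template with maximum correlation.
    `templates` maps a character label to a template image whose size matches
    `character` (size normalization is assumed to happen upstream)."""
    return max(templates, key=lambda label: corr2(character, templates[label]))

# Tiny synthetic usage example with 8x8 binary "templates".
rng = np.random.default_rng(0)
templates = {"ka": rng.integers(0, 2, (8, 8)), "kha": rng.integers(0, 2, (8, 8))}
print(recognize(templates["ka"].copy(), templates))   # -> "ka"
```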
{"title":"Template matching method for Kannada Handwritten recognition based on correlation analysis","authors":"C. Aravinda, H. Prakash","doi":"10.1109/IC3I.2014.7019635","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019635","url":null,"abstract":"Handwriting recognition systems have been developed out of a need to automate the process of converting data into electronic format, which otherwise would have been lengthy and error prone. As we all know that building a character recognition system is one of the major areas of research over a decade, due to its wide range of prospects. Various techniques have been discussed by many researchers regarding the recognition of handwritten characters for different languages. In this paper we adopted a Correlation Technique for recognition of Kannada Handwritten Characters. The formation of Kannada Characters into its compound form, also called as Kagunita makes its recognition more complex. The digitized input image is subjected to various preprocessing techniques and the processed image is then segmented into individual characters using simple segmentation algorithm. The segmented individual character is correlated with the stored templates. The template with maximum correlation value is displayed in editable format.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125493252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A survey on spectrum handoff techniques in cognitive radio networks
Pub Date: 2014-11-01, DOI: 10.1109/IC3I.2014.7019755
Pushp Maheshwari, Awadhesh Kumar Singh
The last decade has witnessed a tremendous increase in the use of wireless applications in all walks of life. Consequently, the availability of the precious and limited electromagnetic radio spectrum has emerged as a major challenge, and the escalating demand for this limited natural resource puts a number of constraints on its usage. Because the spectrum is used inefficiently and utilized ineffectively, a new communication model, namely cognitive radio, has been developed to exploit the available spectrum in an opportunistic manner. In order to use vacant or underutilized frequency bands (called spectrum holes, henceforth) opportunistically, cognitive radios may have to switch frequency bands quite often. This can lead to application discontinuity and performance degradation, so efficient techniques are needed to handle spectrum handoff. In this paper, we present a survey of spectrum handoff techniques together with a brief overview of cognitive radio technology. Cognitive user throughput and handoff delay are the two most common parameters of interest for comparing different handoff techniques.
{"title":"A survey on spectrum handoff techniques in cognitive radio networks","authors":"Pushp Maheshwari, Awadhesh Kumar Singh","doi":"10.1109/IC3I.2014.7019755","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019755","url":null,"abstract":"The last decade has witnessed tremendous increase in the use of wireless applications in all walks of life. Consequently, the availability of treasurable and limited electromagnetic radio spectrum has emerged as a major challenge. The escalated demand of this limited natural resource puts a number of constraints on its usage. Because of inefficient spectrum usage and ineffective utilization, a new communication model, namely cognitive radios has been developed to exploit the available spectrum in an opportunistic manner. In order to use the vacant or underutilized frequency bands (called spectrum holes, henceforth) opportunistically, the cognitive radios may have to switch the frequency band quite often. This may lead to application discontinuity and performance degradation. Thus, it is desired to have efficient techniques to handle the spectrum handoff. In this paper, we present a survey of spectrum handoff techniques with a brief overview of cognitive radio technology. The cognitive user throughput and handoff delay are the two popular parameters of interest for comparing different handoff techniques.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130570504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust classification of primary brain tumor in Computer Tomography images using K-NN and linear SVM
Pub Date: 2014-11-01, DOI: 10.1109/IC3I.2014.7019693
G. Sundararaj, V. Balamurugan
Computed Tomography (CT) images are widely used in assessing intracranial hematoma and hemorrhage. In this paper we develop a new approach for the automatic classification of brain tumors in CT images. The proposed method consists of four stages, namely preprocessing, feature extraction, feature reduction and classification. In the first stage, a Gaussian filter is applied for noise reduction and to make the image suitable for feature extraction. In the second stage, various texture and intensity based features are extracted for classification. In the next stage, principal component analysis (PCA) is used to reduce the dimensionality of the feature space, which results in a more efficient and accurate classification. In the classification stage, two classifiers are used to classify the experimental images into normal and abnormal: the first is based on k-nearest neighbours and the second is a linear SVM. The experimental results are evaluated using the metrics similarity index (SI), overlap fraction (OF) and extra fraction (EF). In comparison with other neural network based classifiers, the proposed technique significantly improves tumor detection accuracy.
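The four stages map naturally onto a standard scikit-learn workflow. The sketch below is a generic illustration with synthetic stand-in data and a deliberately simplified feature extractor; it is not the authors' exact feature set or parameter configuration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def extract_features(ct_slice: np.ndarray) -> np.ndarray:
    """Stages 1-2 placeholder: Gaussian denoising, then simple intensity
    statistics (the paper uses richer texture and intensity features)."""
    smoothed = gaussian_filter(ct_slice.astype(float), sigma=1.0)
    return np.array([smoothed.mean(), smoothed.std(),
                     smoothed.max(), smoothed.min()])

# Synthetic stand-in data: 40 "images" labelled normal (0) / abnormal (1).
rng = np.random.default_rng(42)
images = rng.normal(size=(40, 64, 64)) + rng.integers(0, 2, 40)[:, None, None]
labels = (images.mean(axis=(1, 2)) > 0.5).astype(int)
X = np.array([extract_features(img) for img in images])

# Stages 3-4: PCA for feature reduction, then k-NN or a linear SVM.
knn = make_pipeline(StandardScaler(), PCA(n_components=3),
                    KNeighborsClassifier(n_neighbors=3)).fit(X, labels)
svm = make_pipeline(StandardScaler(), PCA(n_components=3),
                    LinearSVC()).fit(X, labels)
print(knn.predict(X[:5]), svm.predict(X[:5]))
```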
{"title":"Robust classification of primary brain tumor in Computer Tomography images using K-NN and linear SVM","authors":"G. Sundararaj, V. Balamurugan","doi":"10.1109/IC3I.2014.7019693","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019693","url":null,"abstract":"Computer Tomography (CT) Images are widely used in the intracranical hematoma and hemorrhage. In this paper we have developed a new approach for automatic classification of brain tumor in CT images. The proposed method consists of four stages namely preprocessing, feature extraction, feature reduction and classification. In the first stage Gaussian filter is applied for noise reduction and to make the image suitable for extracting the features. In the second stage, various texture and intensity based features are extracted for classification. In the next stage principal component analysis (PCA) is used to reduce the dimensionality of the feature space which results in a more efficient and accurate classification. In the classification stage, two classifiers are used for classify the experimental images into normal and abnormal. The first classifier is based on k-nearest neighbour and second is Linear SVM. The obtained experimental are evaluated using the metric similarity index (SI), overlap fraction (OF), and extra fraction (EF). For comparison, the performance of the proposed technique has significantly improved the tumor detection accuracy with other neural network based classifier.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130483716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Web based and voice enabled IVRS for large scale Malayalam speech data collection
Pub Date: 2014-11-01, DOI: 10.1109/IC3I.2014.7019717
P. Shobana Devi, Divya Das, J. Stephen, V. K. Bhadran
Speech corpora are a vital resource in the development and evaluation of automatic speech recognition systems, as well as for acoustic-phonetic studies. Collecting a large corpus is not an easy task, and the lack of such resources is one of the reasons for the absence of good quality speech recognition systems in Indian languages. We have automated this process by developing a web-based tool for collecting broadband speech data and an IVR system with speech recognition for collecting narrowband speech data. The main features include full support for the typical recording, annotation and project administration workflow and easy editing of the speech content, with the advantage of a fully localizable user interface. This paper describes in detail the development of the web-based speech collection tool and the IVR system, which together enable end-to-end building of a speech corpus with minimal manual effort.
{"title":"Web based and voice enabled IVRS for large scale Malayalam speech data collection","authors":"P. Shobana Devi, Divya Das, J. Stephen, V. K. Bhadran","doi":"10.1109/IC3I.2014.7019717","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019717","url":null,"abstract":"Speech corpora are vital resource in development and evaluation of automatic speech recognition systems, as well as for acoustic phonetic studies. Collecting a huge corpus is not an easy task. The lack of such resources is one of the reasons for the absence of good quality speech recognition systems in Indian languages. Here we have automated such process by developing web based tool for collecting broad band speech data and an IVR system with speech recognition for collecting narrow band speech data. The main features includes the full support for the typical recording, annotation and project administration workflow, easy editing of the speech content, with an advantage of a fully localizable user interface. This paper describes in detail the development of web based speech collection tool and an IVR system which will enable end-to-end building of speech corpus with minimum manual effort.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124611897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling Chinese wall access control using formal concept analysis
Pub Date: 2014-11-01, DOI: 10.1109/IC3I.2014.7019619
S. C. Mouliswaran, Ch. Aswani Kumar, C. Chandrasekar
Chinese wall access control (CWAC) is a well-known and suitable access control model for the secure sharing of commercial consultancy services, designed to prevent information flows that create a conflict of interest for any individual consultant. Our main objective is to model the Chinese wall access control policy using formal concept analysis, which extends and restructures lattice theory. To attain this goal, we develop a formal context that captures the security aspects of Chinese wall access permissions. We evaluate the proposed method in a common commercial consultancy service sharing scenario. The analysis results confirm that the proposed method satisfies the constraints of the Chinese wall security policy and its properties, such as simple security and the *-property.
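To make the policy concrete, the sketch below checks the Chinese wall simple-security rule (Brewer-Nash): a consultant may access an object only if it belongs to a company they have already accessed, or if they have touched no other company in the same conflict-of-interest class. This is a generic illustration of the rule itself, independent of the formal-concept-analysis modelling in the paper; companies and classes are invented.

```python
# Illustrative Chinese wall simple-security check (Brewer-Nash rule).
# Each company belongs to exactly one conflict-of-interest (COI) class.
coi_class = {
    "BankA": "Banking", "BankB": "Banking",
    "OilA": "Oil", "OilB": "Oil",
}

def may_access(history: set, company: str) -> bool:
    """Grant access iff the consultant has already accessed this company,
    or has accessed no other company in the same COI class."""
    same_class = {c for c in history if coi_class[c] == coi_class[company]}
    return company in history or not same_class

consultant_history = set()
for request in ["BankA", "OilA", "BankB", "BankA"]:
    if may_access(consultant_history, request):
        consultant_history.add(request)
        print(f"access to {request}: granted")
    else:
        print(f"access to {request}: denied (conflict of interest)")
# Expected: BankA granted, OilA granted, BankB denied, BankA granted again.
```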
{"title":"Modeling Chinese wall access control using formal concept analysis","authors":"S. C. Mouliswaran, Ch. Aswani Kumar, C. Chandrasekar","doi":"10.1109/IC3I.2014.7019619","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019619","url":null,"abstract":"Chinese wall access control (CWAC) is a well known and suitable access control model for secured sharing of commercial consultancy services. It is to avoid the information flow which causes conflict of interest for every individual consultant in these services. The main objective is to model the Chinese wall access control policy using formal concept analysis which extends and restructures the lattice theory. To attain this goal, we develop a formal context in the security aspects of Chinese wall access permissions. We experiment the proposed method in a common commercial consultancy service sharing scenario. The analysis results confirms that the proposed method satisfies the constraints of Chinese wall security policy and its properties such as simple security and *-property.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133789906","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Live forensics analysis: Violations of business security policy
Pub Date: 2014-11-01, DOI: 10.1109/IC3I.2014.7019695
G. Tanwar, A. S. Poonia
More and more corporate entities today are utilizing ICTs to identify opportunities for innovative, customer-centric, value-added products and services. Indeed, information systems have been a key characteristic of any growing and successful business, as ICTs are used for business value creation. The key motivation for the huge investment in IT infrastructure is to ensure an upsurge in revenue and the retention of a sizeable market share. A computer usage policy is a document that provides guidelines regulating the acceptable usage of these systems by end users; these guidelines also serve as benchmark metrics for assessing the abuse or misuse of corporate information systems. Such misuse and/or abuse is referred to as a violation of computer usage in this study. Ten users, selected randomly from within each unit of a multi-lateral company, were observed for violations. Live computer forensics techniques utilizing EnCase, Microsoft reporting tools, WinHex, etc., were employed to investigate these violations. Notwithstanding the strict corporate policies, the study revealed that end users violated virtually all computer usage policies. This paper further analyses the causes and effects of these violations and offers measures to mitigate them.
{"title":"Live forensics analysis: Violations of business security policy","authors":"G. Tanwar, A. S. Poonia","doi":"10.1109/IC3I.2014.7019695","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019695","url":null,"abstract":"Many more corporate entities today are utilizing ICTs to identify opportunities for innovative and customer-centric, value-added products and services. Indeed, information systems have been key characteristic of any growing and successful businesses, as they utilize ICTs for business value creation. The key motivation for the huge investment in IT infrastructures is to ensure an upsurge in revenue and retention of sizeable market share. Computer Usage policy is a document that provides guidelines that regulates the acceptable usage of these systems by end- users. The provision of these guidelines also serve as benchmark metrics in assessing the abuse or misuse of corporate information systems. These misuse and/or abuse are referred to as violations of computer usage in this study. 10 users, selected randomly from within each unit of a multi-lateral company, were observed for violations. Live computer forensics techniques utilizing EnCase, Microsoft reporting tools, WinHex, etc., were employed to investigate these violations. Notwithstanding the strict corporate policies, the study revealed that end-users virtually violated all computer usage policies. This paper further analyses and addresses the causes, effects and offers measures to mitigate computer usage violations.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132188807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A trade-off establishment between software complexity and its usability using evolutionary multi-objective optimization (EMO)
Pub Date: 2014-11-01, DOI: 10.1109/IC3I.2014.7019730
Vandana Yadav, Siddharth Lavania, Arun Chaudhary, Namrata Dhanda
In this paper, by applying an evolutionary multi-objective optimization (EMO) technique, we establish a trade-off between two conflicting aspects of software: its complexity and its usability (business value).
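As a minimal illustration of the trade-off idea (not the authors' EMO formulation), the sketch below extracts the Pareto front from a set of candidate designs scored on two conflicting objectives: complexity to be minimized and usability to be maximized. The objective values are invented for the example.

```python
import numpy as np

def pareto_front(complexity: np.ndarray, usability: np.ndarray) -> np.ndarray:
    """Indices of non-dominated designs: no other design has lower-or-equal
    complexity AND higher-or-equal usability with at least one strict gain."""
    n = len(complexity)
    keep = []
    for i in range(n):
        dominated = any(
            complexity[j] <= complexity[i] and usability[j] >= usability[i]
            and (complexity[j] < complexity[i] or usability[j] > usability[i])
            for j in range(n)
        )
        if not dominated:
            keep.append(i)
    return np.array(keep)

# Hypothetical candidate software designs: (complexity score, usability score).
complexity = np.array([10, 20, 30, 35, 40])
usability = np.array([3.0, 6.0, 7.0, 6.5, 8.0])
front = pareto_front(complexity, usability)
print("Pareto-optimal designs:", front)   # -> [0 1 2 4]; design 3 is dominated
```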
{"title":"A trade-off establishment between software complexity and its usability using evolutionary multi-objective optimization (EMO)","authors":"Vandana Yadav, Siddharth Lavania, Arun Chaudhary, Namrata Dhanda","doi":"10.1109/IC3I.2014.7019730","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019730","url":null,"abstract":"In this paper, by the application of evolutionary multi-objective optimization (EMO) technique, we will be establishing a trade-off balance between the two conflicting aspects i.e. complexity of a software and the usability (business value) of the software.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132316470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Depth recovery from stereo images
Pub Date: 2014-11-01, DOI: 10.1109/IC3I.2014.7019764
P. R. Induchoodan, M. J. Josemartin, P. R. Geetharanjin
In machine vision applications, distance or depth is an important factor. This paper describes a stereoscopic depth calculation method that uses images captured by two identical cameras separated by a small distance. The method requires camera calibration and rectification, an important step needed to match the images captured by the two cameras. Disparity is then calculated using stereo matching and is directly related to depth. The proposed method is very useful for planetary vision, autopilots, etc.
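A minimal OpenCV sketch of the disparity-to-depth step: block matching on a rectified pair gives a disparity map d, and depth follows from Z = f*B/d for focal length f (in pixels) and baseline B. The file names and calibration values below are placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Placeholder rectified stereo pair and calibration values (not from the paper).
left = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)
focal_length_px = 700.0     # f, in pixels, from camera calibration
baseline_m = 0.06           # B, camera separation in metres

# Block-matching stereo correspondence on the rectified images.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point scale

# Depth from triangulation: Z = f * B / d (valid only where disparity > 0).
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_length_px * baseline_m / disparity[valid]
print("median scene depth (m):", np.median(depth_m[valid]))
```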
{"title":"Depth recovery from stereo images","authors":"P. R. Induchoodan, M. J. Josemartin, P. R. Geetharanjin","doi":"10.1109/IC3I.2014.7019764","DOIUrl":"https://doi.org/10.1109/IC3I.2014.7019764","url":null,"abstract":"In machine vision applications, distance or depth is an important factor. This paper describes stereoscopic depth calculation method by using images by two identical cameras separated by a small distance. This method requires calibration of cameras and rectification, an important step which is required for the matching of the images captured by two cameras. Using this stereo matching technique disparity is calculated. This is directly related to the depth. The proposed method is very much useful for planetary vision, autopilots, etc.","PeriodicalId":430848,"journal":{"name":"2014 International Conference on Contemporary Computing and Informatics (IC3I)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130074050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}