An analysis of SOBEL and GABOR image filters for identifying fish
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496457
G. T. Shrivakshan
This paper deals with classifying shark species using edge information, since edges characterize object boundaries. Identifying the type of shark in deep-sea imagery is a problem of fundamental importance. Edge detection sits at the front end of a computer vision system for object recognition, so a good understanding of edge detection techniques is critical. The paper presents a comparative analysis of several image edge detection techniques. The proposed work was tested in MATLAB, and the results show that the Gabor filter performs better than the Sobel filter.
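The abstract gives no code; as a rough sketch of the comparison it describes, the snippet below applies a Sobel operator and a small Gabor filter bank to a grayscale image using scikit-image. The image path, Gabor frequency and the Otsu-based comparison are assumptions, not the paper's exact procedure.

```python
# Minimal sketch of a Sobel-vs-Gabor edge comparison, assuming a grayscale
# shark image stored at "shark.png" and hand-picked Gabor parameters.
import numpy as np
from skimage import io, color, filters

image = io.imread("shark.png")                       # hypothetical input file
gray = color.rgb2gray(image) if image.ndim == 3 else image

# Sobel: gradient-magnitude edge map.
sobel_edges = filters.sobel(gray)

# Gabor: combine magnitude responses over 4 orientations.
gabor_edges = np.zeros_like(gray, dtype=float)
for theta in np.linspace(0, np.pi, 4, endpoint=False):
    real, imag = filters.gabor(gray, frequency=0.2, theta=theta)
    gabor_edges = np.maximum(gabor_edges, np.hypot(real, imag))

# Crude comparison: fraction of pixels marked as edges after Otsu thresholding.
for name, edges in [("Sobel", sobel_edges), ("Gabor", gabor_edges)]:
    mask = edges > filters.threshold_otsu(edges)
    print(f"{name}: {mask.mean():.3%} of pixels classified as edge")
```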
{"title":"An analysis of SOBEL and GABOR image filters for identifying fish","authors":"G. T. Shrivakshan","doi":"10.1109/ICPRIME.2013.6496457","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496457","url":null,"abstract":"This paper deals in classifying shark fishes using the Edges characterize boundaries. It is a problem of fundamental importance in detecting the type of shark fish in the deep sea. The edge detection is in the head of computer vision system for recognition of objects and estimate it is critical to have a good perceptive of edge detection techniques. In this paper the comparative analysis of various Image Edge Detection techniques are considered. The proposed work was tested in MATLAB tool. It has been shown that the Gabor's filter performs better than Sobel filter.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130910995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CRY — An improved crop yield prediction model using bee hive clustering approach for agricultural data sets
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496717
M. G. Ananthara, T. Arunkumar, R. Hemavathy
Agricultural researchers worldwide stress the need for an efficient mechanism to predict and improve crop growth, and the farming community strongly feels the need for integrated crop growth control with an accurate predictive yield management methodology. Predicting crop yield is complex largely because of multi-dimensional variable metrics and the lack of a predictive modelling approach, which leads to losses in crop yield. This paper proposes a crop yield prediction model (CRY) that works on an adaptive clustering approach over a dynamically updated historical crop data set to predict crop yield and improve decision making in precision agriculture. CRY uses a bee hive modelling approach to analyse and classify crops based on growth pattern and yield. The CRY-classified dataset was tested using Clementine against existing crop domain knowledge, and the results compare the performance of CRY with other clustering approaches.
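The abstract does not spell out the bee hive clustering algorithm itself, so the sketch below only illustrates the surrounding workflow: cluster historical crop records by growth conditions and yield, then predict a new record's yield from its cluster. Plain K-means stands in for the paper's bee-hive-based clusterer, and the features and numbers are invented.

```python
# Illustrative workflow only: cluster historical crop records and predict yield
# from the matching cluster.  K-means is a stand-in for the paper's bee hive
# clustering, whose details are not given in the abstract.
import numpy as np
from sklearn.cluster import KMeans

# columns: rainfall (mm), temperature (C), fertilizer (kg/ha), yield (t/ha) -- invented data
history = np.array([
    [650, 24,  90, 3.1],
    [700, 25,  95, 3.4],
    [420, 29,  60, 1.8],
    [400, 30,  55, 1.6],
    [820, 22, 120, 4.2],
    [790, 23, 110, 4.0],
])

features, yields = history[:, :3], history[:, 3]
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

def predict_yield(new_record):
    """Predict yield as the mean yield of the cluster the new record falls in."""
    cluster = model.predict(np.asarray(new_record, dtype=float).reshape(1, -1))[0]
    return yields[model.labels_ == cluster].mean()

print(predict_yield([680, 24, 100]))   # should land near the moderate-rainfall cluster
```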
{"title":"CRY — An improved crop yield prediction model using bee hive clustering approach for agricultural data sets","authors":"M. G. Ananthara, T. Arunkumar, R. Hemavathy","doi":"10.1109/ICPRIME.2013.6496717","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496717","url":null,"abstract":"Agricultural researchers over the world insist on the need for an efficient mechanism to predict and improve the crop growth. The need for an integrated crop growth control with accurate predictive yield management methodology is highly felt among farming community. The complexity of predicting the crop yield is highly due to multi dimensional variable metrics and unavailability of predictive modeling approach, which leads to loss in crop yield. This research paper suggests a crop yield prediction model (CRY) which works on an adaptive cluster approach over dynamically updated historical crop data set to predict the crop yield and improve the decision making in precision agriculture. CRY uses bee hive modeling approach to analyze and classify the crop based on crop growth pattern, yield. CRY classified dataset had been tested using Clementine over existing crop domain knowledge. The results and performance shows comparison of CRY over with other cluster approaches.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126541737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Personal approach for mobile search: A review
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496476
Amol D. Gaikwad, Ctech Deptt
Web services are a popular standard for publishing services to users; however, diverse users need to access a web service according to their particular preferences. Mobile search differs from standard PC-based web search in several ways: (a) the user interface and I/O are limited by screen real estate, (b) keypads are tiny and inconvenient to use, (c) bandwidth is limited, and (d) connection fees are costly. This review paper focuses on personalization strategies that explicitly and implicitly infer a user's search context at the individual user level. It also describes an architecture that collects user information (at the mobile device and in the carrier network) and derives user intention in a given situation.
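The surveyed systems are not given in code; as a generic illustration of personalization at the individual-user level (not any specific strategy from the review), the snippet below re-ranks candidate results by similarity to a profile built from the user's implicit context. The profile terms and result snippets are invented.

```python
# Generic illustration of result re-ranking against a per-user context profile.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Profile inferred from the user's recent activity (invented terms).
user_profile = Counter("fish market price chennai seafood".split())

results = [
    "fish price today in chennai market",
    "tropical fish tank maintenance",
    "seafood restaurant reviews",
]

ranked = sorted(results, key=lambda r: cosine(Counter(r.split()), user_profile), reverse=True)
print(ranked)
```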
{"title":"Personal approach for mobile search: A review","authors":"Amol D. Gaikwad, Ctech Deptt","doi":"10.1109/ICPRIME.2013.6496476","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496476","url":null,"abstract":"Web service is a popular standard to publish services for users. However, diversified users need to access web service according to their particular preferences. Mobile search is quite different from standard PC-based web search in a number of ways: (a) the user interfaces and I/O are limited by screen real state, (b) key pads are tiny and inconvenient for use, (c) limited bandwidth and (d) costly connection fees. This review paper focuses on the personalization strategies which explicitly and implicitly infer user search context at individual user level. The paper also focuses on an architecture which collects user information (at mobile device and carrier network) and derives user intention in given situations.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114197744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliable code coverage technique in software testing
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496465
D. N. Rao, M. Srinath, P. Hiranmani Bala
E-learning has become a major field of interest in recent years, and multiple approaches and solutions have been developed. Testing is the most important way of assuring the quality of e-learning software. Such software suffers from miscommunication or a lack of communication, software complexity, programming errors, time pressure, changing requirements and unrealistic expectations, all of which produce bugs. To remove or defuse the bugs that cause many project failures at the final delivery stage, this paper proposes a reliable code coverage technique for software testing that aims to ensure bug-free delivery of the developed software. Software testing aims at detecting error-prone areas, which helps in the detection and correction of errors. Coverage analysis can be applied at the unit, integration and system levels of the testing process, although it is usually done at the unit level. This method of test design uncovered many errors and problems. Experimental results show that the increase in software performance rating and software quality assurance raises the testing performance level.
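The abstract describes the technique only at a high level, so the snippet below is not the paper's method; it merely illustrates the underlying coverage idea: record which branches a test suite exercises and report the fraction of branches covered.

```python
# Minimal illustration of branch-coverage measurement: each branch registers
# itself in a set when executed, and coverage is the fraction of known
# branches exercised by the test suite.
BRANCHES = {"grade_pass", "grade_fail"}
covered = set()

def grade(score: int) -> str:
    if score >= 50:
        covered.add("grade_pass")
        return "pass"
    else:
        covered.add("grade_fail")
        return "fail"

# A test suite that only exercises the passing branch leaves coverage incomplete.
assert grade(72) == "pass"
print(f"branch coverage: {len(covered) / len(BRANCHES):.0%}")   # 50%

assert grade(31) == "fail"
print(f"branch coverage: {len(covered) / len(BRANCHES):.0%}")   # 100%
```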
{"title":"Reliable code coverage technique in software testing","authors":"D. N. Rao, M. Srinath, P. Hiranmani Bala","doi":"10.1109/ICPRIME.2013.6496465","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496465","url":null,"abstract":"E-Learning has become a major field of interest in recent year, and multiple approaches and solutions have been developed. Testing in E-Iearning software is the most important way of assuring the quality of the application. The E-Learning software contains miscommunication or no communication, software complexity, programming errors, time pressures and changing requirements, there are too many unrealistic software which results in bugs. In order to remove or defuse the bugs that cause a lot of project failures at the final stage of the delivery., this paper focuses on adducing a Reliable code coverage technique in software testing, which will ensure a bug free delivery of the software development. Software testing aims at detecting error-prone areas. This helps in the detection and correction of errors. It can be applied at the unit of integration and system levels of the software testing process, and it is usually done at the unit level. This method of test design uncovered many errors or problems. Experimental results show that, the increase in software performance rating and software quality assurance increases the testing level in performance.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"64 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120899257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Outliers detection on protein localization sites by partitional clustering methods
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496519
P. Ashok, G. M. Kadhar Nawaz, K. Thangavel, E. Elayaraja
A protein is a large molecule composed of one or more chains of amino acids in a specific order; the order is determined by the base sequence of nucleotides in the gene that codes for the protein. Proteins are required for the structure, function and regulation of the body's cells, tissues and organs, and each protein has unique functions. A protein's localization site identifies the organelle to which it is transported. This paper introduces clustering and its partitional variants, K-Means and K-Medoids. The clustering algorithms are improved by selecting the initial centroids with two proposed selection methods instead of choosing them randomly; the variants are compared using the Davies-Bouldin index, and the proposed Algorithm 1 overcomes the drawbacks of random initial-centre selection better than the other methods. In the yeast dataset, defective proteins (objects) are treated as outliers and are identified by the clustering methods using an ADOC (Average Distance between Object and Centroid) function. The outlier detection and performance analysis methods are studied and compared, and the experimental results show that K-Medoids performs better than K-Means clustering.
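As a sketch of the clustering-plus-outlier step described above, the snippet below runs K-Means, scores it with the Davies-Bouldin index from scikit-learn, and flags outliers with an ADOC-style rule paraphrased from the abstract. The data, the outlier threshold factor and the exact ADOC formulation are assumptions; the K-Medoids variant would be handled analogously.

```python
# K-Means + Davies-Bouldin index + ADOC-style outlier flagging (sketch only).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=(0, 0), scale=0.3, size=(50, 2)),
    rng.normal(loc=(3, 3), scale=0.3, size=(50, 2)),
    [[6.0, -2.0]],                                     # an artificial "defective" object
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("Davies-Bouldin index:", davies_bouldin_score(X, km.labels_))

# ADOC rule: flag an object whose distance to its centroid greatly exceeds the
# cluster's average object-centroid distance (the factor 2.0 is assumed).
dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
outliers = []
for c in range(km.n_clusters):
    in_c = km.labels_ == c
    adoc = dist[in_c].mean()                           # average distance between object and centroid
    outliers.extend(np.where(in_c & (dist > 2.0 * adoc))[0])
print("flagged outliers:", outliers)
```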
{"title":"Outliers detection on protein localization sites by partitional clustering methods","authors":"P. Ashok, G. M. Kadhar Nawaz, K. Thangavel, E. Elayaraja","doi":"10.1109/ICPRIME.2013.6496519","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496519","url":null,"abstract":"A large molecule composed of one or more chains of amino acids in a specific order, the order is determined by the base sequence of nucleotides in the gene that codes for the protein. Proteins are required for the structure, function, and regulation of the body's cells, tissues, and organs and each protein has unique functions. Localization sites of proteins are identified by the mechanism and moved to its corresponding organelles. In this paper, we introduce the method clustering and its type's K-Means and K-Medoids. The clustering algorithms are improved by implementing the two initial centroid selection methods instead of selecting centroid randomly. K-Means algorithm can be improved by implementing the initial cluster centroids are selected by the two proposed algorithms instead of selecting centroids randomly, which is compared by using Davie Bouldin index measure, hence the proposed algorithm1 overcomes the drawbacks of selecting initial cluster centers then other methods. In the yeast dataset, the defective proteins (objects) are considered as outliers, which are identified by the clustering methods with ADOC (Average Distance between Object and Centroid) function. The outlier's detection method and performance analysis method are studied and compared, the experimental results shows that the K-Medoids method performs well when compare with the K-Means clustering.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125642507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Super Strongly Perfectness of Prism and Rook's Networks
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496506
R. M. Jeya Jothi, A. Amutha
A graph G is a Super Strongly Perfect Graph if every induced subgraph H of G possesses a minimal dominating set that meets all the maximal complete subgraphs of H. In this paper we characterize the structure of super strongly perfect graphs in prism and rook's networks. Along with this characterization, we investigate super strong perfectness in prism and rook's networks, and we give the relationship between the diameter, domination and co-domination numbers of the prism network.
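As a small computational check of the defining property on one prism network, the snippet below takes CL_4 (networkx's circular_ladder_graph, the cube graph) and a hand-picked candidate set, and verifies that the set is a minimal dominating set meeting every maximal clique. The choice of graph and of the set D is an illustration, not drawn from the paper.

```python
# Check the super-strongly-perfect defining property on the prism network CL_4.
import networkx as nx

G = nx.circular_ladder_graph(4)    # outer cycle 0-3, inner cycle 4-7, rungs i-(i+4)
D = {0, 2, 5, 7}                   # candidate minimal dominating set (chosen by hand)

assert nx.is_dominating_set(G, D)
# minimality: removing any vertex of D breaks domination
assert all(not nx.is_dominating_set(G, D - {v}) for v in D)

# D meets every maximal complete subgraph; CL_4 is triangle-free, so the
# maximal cliques are exactly the edges.
for clique in nx.find_cliques(G):
    assert D & set(clique), f"{clique} not met by D"
print("D is a minimal dominating set meeting all maximal cliques of CL_4")
```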
{"title":"Super Strongly Perfect ness of Prism and Rook's Networks","authors":"R. M. Jeya Jothi, A. Amutha","doi":"10.1109/ICPRIME.2013.6496506","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496506","url":null,"abstract":"A Graph G is Super Strongly Perfect Graph if every induced sub graph H of G possesses a minimal dominating set that meets all the maximal complete sub graphs of H. In this paper we have characterized the structure of super strongly perfect graphs in Prism and Rook's Networks. Along with this characterization, we have investigated the Super Strongly Perfect ness in Prism and Rook's Networks. Also we have given the relationship between diameter, domination and co-domination numbers of Prism Network.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132611671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background subtraction based on threshold detection using modified K-means algorithm
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496505
A. N. Kumar, C. Sureshkumar
In video surveillance systems, background subtraction is the first processing stage and is used to determine the objects in a scene. It is a general term for a process that aims to separate foreground objects from a relatively stationary background, and it must run in real time. In a human detection system it is obtained by computing the pixel-by-pixel variation between the current frame and the background image, followed by automatic thresholding. This paper proposes a K-means-based background subtraction method for real-time video processing in video surveillance. We analyse and evaluate the performance of the proposed method against standard K-means and other background subtraction algorithms, and the experimental results show that the proposed method produces better output.
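The abstract does not detail the modification to K-means, so the sketch below only illustrates the general idea of threshold detection by clustering: compute per-pixel absolute differences from a background frame, split them into two clusters (background vs foreground) with K-means, and take the midpoint between the cluster centres as the threshold. The frames here are synthetic.

```python
# Threshold selection by 1-D K-means for background subtraction (sketch).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
background = rng.normal(100, 5, size=(120, 160))          # static scene + sensor noise
frame = background + rng.normal(0, 5, size=background.shape)
frame[40:80, 60:100] += 60                                 # a bright moving object

diff = np.abs(frame - background)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(diff.reshape(-1, 1))

# threshold = midpoint between the two cluster centres (low = background, high = foreground)
low, high = sorted(km.cluster_centers_.ravel())
threshold = (low + high) / 2
foreground = diff > threshold

print(f"threshold = {threshold:.1f}, foreground pixels = {foreground.mean():.1%}")
```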
{"title":"Background subtraction based on threshold detection using modified K-means algorithm","authors":"A. N. Kumar, C. Sureshkumar","doi":"10.1109/ICPRIME.2013.6496505","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496505","url":null,"abstract":"In video surveillance systems, background subtraction is the first processing stage and it is used to determine the objects in a particular scene. It is a general term for a process which aims to separate foreground objects from a relatively stationary background. It should be processed in real time. It is obtained in human detection system by computing the variation, pixel-by-pixel, between the current frame and the image of the background, followed by an automatic threshold. This paper proposed a K means based background subtraction for real time video processing in video surveillance. We have analyzed and evaluate the performance of the proposed method, with standard K-means and other background subtractions algorithms. The experimental results showed that the proposed method provides better output.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133245258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Heterogeneous wireless network selection using FAHP integrated with TOPSIS and VIKOR
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496510
V. Sasirekha, M. Ilanzkumaran
Selecting the most appropriate network in a heterogeneous wireless environment is one of the critical issues in providing the best Quality of Service (QoS) to users, and choosing an apt network from various alternatives is a Multi-Criteria Decision Making (MCDM) problem. This paper describes a novel MCDM method to evaluate and select a suitable network in a heterogeneous wireless network environment. In the proposed technique, the Fuzzy Analytic Hierarchy Process (FAHP) is integrated with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) and VIKOR (VlseKriterijumska Optimizacija I Kompromisno Resenje). FAHP is used to determine the criteria weights, while TOPSIS and VIKOR are used to rank the alternative networks. The study considers five network alternatives (WLAN, GPRS, UMTS, WiMAX and CDMA) and ten evaluation criteria (bandwidth, latency, jitter, BER, retransmission, packet loss, throughput, preference, security and cost) for selecting the appropriate network.
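The FAHP weight derivation and the VIKOR step are omitted here; the snippet below sketches only the TOPSIS ranking stage on an invented decision matrix with assumed weights and criteria, to show how the alternatives would be ordered once the FAHP weights are available.

```python
# TOPSIS ranking step only; the decision matrix, weights and criteria are
# illustrative assumptions (the paper derives the weights with FAHP).
import numpy as np

alternatives = ["WLAN", "GPRS", "UMTS", "WiMAX", "CDMA"]
# criteria: bandwidth (benefit), latency (cost), packet loss (cost), cost (cost)
X = np.array([
    [54.0,  50.0, 0.02, 10.0],
    [0.17, 500.0, 0.05,  5.0],
    [2.0,  150.0, 0.03,  8.0],
    [40.0,  80.0, 0.02, 12.0],
    [2.4,  200.0, 0.04,  6.0],
])
weights = np.array([0.4, 0.3, 0.2, 0.1])       # assumed; normally obtained from FAHP
benefit = np.array([True, False, False, False])

V = weights * X / np.linalg.norm(X, axis=0)     # vector-normalised, weighted matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)             # higher = closer to the ideal solution

for name, c in sorted(zip(alternatives, closeness), key=lambda t: -t[1]):
    print(f"{name:6s} closeness = {c:.3f}")
```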
{"title":"Heterogeneous wireless network selection using FAHP integrated with TOPSIS and VIKOR","authors":"V. Sasirekha, M. Ilanzkumaran","doi":"10.1109/ICPRIME.2013.6496510","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496510","url":null,"abstract":"The selection of the most appropriate network in heterogeneous Wireless environment is one of the critical issues to provide the best Quality of Service (QOS) to the users. The selection of an apt network among various alternatives is a kind of Multi Criteria Decision Making (MCDM) problem. This paper describes a novel Multi Criteria Decision Making (MCDM) method to evaluate and select the suitable network for homogeneous wireless network environment. The proposed MCDM technique involves Fuzzy Analytical Hierarchy Process (FAHP) is integrated with Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) and VlseKriterijumska Optimizacija I Kompromisno Resenje in Serbian (VIKOR) techniques. FAHP is used to determine the criteria weights, whereas TOPSIS and VIKOR used to find the performance ranking of the alternative networks. This study focuses on five network alternatives such as WLAN, GPRS, UMTS, WIMAX, and CDMA and ten evaluation criteria such as bandwidth, latency, jitter, BER, Retransmission, Packet loss, through put, preference, security, cost to select the appropriate network.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133421155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel approach for speech feature extraction by Cubic-Log compression in MFCC
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496469
M. R. Devi, T. Ravichandran
Speech pre-processing is a major step in feature vector extraction for an efficient Automatic Speech Recognition (ASR) system. This paper presents a novel approach to speech feature extraction that applies the Mel-Frequency Cepstral Coefficient (MFCC) algorithm with Cubic-Log compression in place of the usual logarithmic compression. In the proposed MFCC, the frequency axis is first warped to the mel scale, which is roughly linear below 2 kHz and logarithmic above this point. Triangular filters equally spaced on the mel scale are applied to the warped spectrum, the filter outputs are compressed using a Cubic-Log function, and cepstral coefficients are computed by applying a DCT to obtain a minimal MFCC feature vector for each spoken word. These feature vectors are given as input to the classification and recognition phase. The system is trained and tested by generating MFCC feature vectors for 600 isolated words, 256 connected words and 150 sentences in clean and noisy environments. Experimental results show that the minimal MFCC feature vector is sufficient for the speech recognition system to achieve a high recognition rate, and performance is measured using the Mean Square Error (MSE) rate.
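The abstract does not define the Cubic-Log function precisely. As a sketch of where the compression stage sits in a standard MFCC pipeline, the snippet below computes mel filterbank energies with librosa and compares plain logarithmic compression with a cube-root variant, which is one plausible reading of "Cubic-Log" and is an assumption, as is the audio file name.

```python
# MFCC pipeline sketch with the compression stage swapped out.
import numpy as np
import librosa
from scipy.fftpack import dct

y, sr = librosa.load("word.wav", sr=16000)                    # hypothetical utterance
mel_energies = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=26)

def cepstra(energies, compress):
    """Compress the mel filterbank energies, then decorrelate with a DCT."""
    return dct(compress(energies + 1e-10), axis=0, norm="ortho")[:13]   # minimal 13-coefficient vector

log_mfcc   = cepstra(mel_energies, np.log)    # standard logarithmic compression
cubic_mfcc = cepstra(mel_energies, np.cbrt)   # assumed reading of "Cubic-Log": cube-root compression

print(log_mfcc.shape, cubic_mfcc.shape)       # (13, n_frames) each
```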
{"title":"A novel approach for speech feature extraction by Cubic-Log compression in MFCC","authors":"M. R. Devi, T. Ravichandran","doi":"10.1109/ICPRIME.2013.6496469","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496469","url":null,"abstract":"Speech Pre-processing is measured as major step in development of feature vector extraction for an efficient Automatic Speech Recognition (ASR) system. A novel approach for speech feature extraction is by applying the Mel-frequency cepstral co-efficient (MFCC) algorithm using Cubic-Log compression instead of Logarithmic compression in MFCC. In proposed MFCC, the frequency axis is initially warped to the mel-scale which is roughly below 2 kHz and logarithmic above this point. Triangular filter are equally spaced in the mel-scale are applied on the warped spectrum. The result of the filters are compressed using Cubic-Log function and cepstral co-efficient are computed by applying DCT to obtain minimum MFCC feature vector for spoken words. These feature vectors are given as input to classification and Recognition phase. The system is trained and tested by generating MFCC feature vector for 600 isolated words, 256 connected words and 150 sentences in clear and noisy environment. Experiment results shows that with minimum MFCC feature vector is enough for speech recognition system to achieve high recognition rate and its performance is measured based on Mean Square Error (MSE) rate.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133238584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data property analyzer for information storage in cloud
Pub Date: 2013-04-15 | DOI: 10.1109/ICPRIME.2013.6496518
S. Srinivasan, R. Krishnan
This paper describes a new approach to maintaining the data usage report for cloud data storage using a novel data property analyzer. Cloud data storage is a technology that uses the internet and central remote servers to maintain data and share applications; it allows consumers to use applications without installation and to access their personal files from any computer with internet access. In a typical data property analysis system, the source and destination file contents are compared byte by byte. In a cloud environment, data verification is needed for every computation to ensure storage correctness, so each time the data is retrieved the local file must be compared with the destination file in the cloud zone. This procedure takes too much time to detect even a tiny change in the cloud file's content. The proposed system instead checks file properties to detect changes, rather than verifying the entire file content; the attributes checked include file size, modification date, file name and location.
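A minimal sketch of the property-checking idea, assuming local access to both copies: compare size, modification time and name via os.stat instead of reading the full contents. The paths are placeholders; a real system would fetch the remote attributes through the storage provider's API rather than a mounted path.

```python
# Compare file attributes instead of byte-level content (sketch only).
import os

def properties(path: str) -> dict:
    st = os.stat(path)
    return {
        "name": os.path.basename(path),
        "location": os.path.dirname(os.path.abspath(path)),
        "size": st.st_size,
        "modified": st.st_mtime,
    }

def likely_changed(local_path: str, remote_path: str) -> bool:
    """Flag a change whenever a tracked property differs (location is reported
    but not compared, since the two copies naturally live in different places)."""
    local, remote = properties(local_path), properties(remote_path)
    return any(local[k] != remote[k] for k in ("name", "size", "modified"))

print(likely_changed("report.docx", "/mnt/cloud/report.docx"))   # placeholder paths
```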
{"title":"Data property analyzer for information storage in cloud","authors":"S. Srinivasan, R. Krishnan","doi":"10.1109/ICPRIME.2013.6496518","DOIUrl":"https://doi.org/10.1109/ICPRIME.2013.6496518","url":null,"abstract":"This paper inscribes a new approach for maintaining the data usage report for the cloud data storage using a novel data property analyzer. Cloud data storage is a technology that uses the internet and central remote servers to maintain data and share the applications. It allows consumer to use applications without installation and access their personal files at any computer with internet access. In general data property analysis system, Source and destination file content is compared in the form of bytes. In the cloud environment, data verification is needed for every computation in the storage correctness. So every time the data is retrieved from local system and compared with the destination file from the cloud zone. This procedure takes too much of time to find out a tiny change in the cloud file content. The proposed system is implemented with the idea of checking the file properties to find out the change in file content instead of verifying the entire file content. To check properties we have taken some of the file attributes such as file size, file modification date, file name and location.","PeriodicalId":123210,"journal":{"name":"2013 International Conference on Pattern Recognition, Informatics and Mobile Engineering","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122223411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}