Recent trends in future proof fiber access passive networks: GPON and WDM PON
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996129
K. Satyanarayana, Balagoni Abhinov
Fiber to the Home (FTTH) is broadly acknowledged as the optimal solution for meeting current and future broadband requirements. FTTH involves installing an optical fiber network from a central service provider to individual users. A key technology for FTTH implementation is the Passive Optical Network (PON), which provides unprecedented bandwidth. The attributes of the modern PON, such as high reliability, low cost, and passive operation, are chiefly responsible for the proliferation of optical fiber access. Recent trends in fiber-access passive optical networks include the Ethernet Passive Optical Network (EPON), the Gigabit Passive Optical Network (GPON), and the Wavelength Division Multiplexing Passive Optical Network (WDM PON). This paper focuses on GPON and next-generation WDM PON: it discusses their key technological aspects and challenges, and offers the reader a comparative study of the two fiber-access passive networks on the basis of cost, capacity, and flexibility.
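The abstract compares the architectures on cost and capacity without giving numbers; one concrete driver of GPON's capacity/reach trade-off is that the passive splitter's insertion loss grows with the split ratio. A minimal illustrative calculation (the 0.3 dB excess loss per stage, 28 dB optical budget, and 0.35 dB/km fiber loss are assumptions for illustration, not values from the paper):

```python
import math

def splitter_loss_db(split_ratio: int, excess_db_per_stage: float = 0.3) -> float:
    """Ideal 1:N splitter loss (10*log10 N) plus a small excess loss per 1:2 stage."""
    stages = math.log2(split_ratio)
    return 10 * math.log10(split_ratio) + excess_db_per_stage * stages

def max_reach_km(budget_db: float, split_ratio: int, fiber_db_per_km: float = 0.35) -> float:
    """Budget left after the splitter, converted to maximum fiber length."""
    return (budget_db - splitter_loss_db(split_ratio)) / fiber_db_per_km

for n in (16, 32, 64):  # doubling the split ratio costs roughly 3.3 dB of reach budget
    print(f"1:{n} split: {splitter_loss_db(n):.1f} dB loss, "
          f"~{max_reach_km(28.0, n):.1f} km reach")
```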
{"title":"Recent trends in future proof fiber access passive networks: GPON and WDM PON","authors":"K. Satyanarayana, Balagoni Abhinov","doi":"10.1109/ICRTIT.2014.6996129","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996129","url":null,"abstract":"Fiber to the Home (FTTH) is broadly acknowledged as the optimal solution for meeting the current and future broadband requirements. FTTH involves the installation of optical fiber network from a central service provider to individual users. An innovative technology for FTTH implementation is Passive Optical Network (PON), which provides unprecedented bandwidth. The attributes of modern passive optical network (PON) such as high reliability, low cost, passive functionality are mainly responsible for the proliferation of optical fiber access. The recent trends in fiber access passive optical networks include Ethernet Passive Optical Network (EPON), Gigabit Passive Optical Network (GPON) and Wavelength Division Multiplexing Passive Optical Network (WDM PON). This paper focuses on PON such as GPON and next generation WDM PON. This paper discusses key technological aspects, and challenges of GPON, and next generation WDM PON and aims to provide insights to a reader by doing a comparative study of the two fiber access passive networks on the basis of cost, capacity and flexibility.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122344247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clustering fusion with automatic cluster number
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996186
P. Muneeswaran, P. Velvizhy, A. Kannan
Most real-world applications use data clustering techniques for effective data analysis. Every clustering technique makes assumptions about the underlying dataset, and accurate clusters can be expected only when those assumptions hold; in practice it is difficult to satisfy all of them. No single clustering algorithm currently available can discover all types of cluster shapes and structures. This paper therefore proposes an ensemble clustering algorithm to produce accurate clusters. Moreover, existing clustering-ensemble methods require the number of final clusters to be specified in advance. We propose a novel method that groups a set of base clusters into accurate final clusters to enhance decision accuracy; it does not need the number of clusters as input but determines it automatically while producing the clusters.
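The abstract does not spell out how the base clusterings are fused; a common way to combine them without fixing the final cluster count is evidence accumulation: build a co-association matrix over many base runs, then cut the resulting hierarchy at a fixed similarity threshold so the number of clusters falls out automatically. A sketch under that assumption (not necessarily the authors' exact method; the 0.5 cut threshold is illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def ensemble_cluster(X, n_runs=30, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    coassoc = np.zeros((n, n))
    for _ in range(n_runs):
        # each base run uses a random k, so no single k is privileged
        k = int(rng.integers(2, int(np.sqrt(n)) + 1))
        labels = KMeans(n_clusters=k, n_init=5,
                        random_state=int(rng.integers(0, 1_000_000))).fit_predict(X)
        coassoc += (labels[:, None] == labels[None, :])
    coassoc /= n_runs  # fraction of runs in which each pair co-clustered
    # hierarchical cut at 0.5 co-association: cluster count emerges automatically
    dist = squareform(1.0 - coassoc, checks=False)
    Z = linkage(dist, method="average")
    return fcluster(Z, t=0.5, criterion="distance")
```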
{"title":"Clustering fusion with automatic cluster number","authors":"P. Muneeswaran, P. Velvizhy, A. Kannan","doi":"10.1109/ICRTIT.2014.6996186","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996186","url":null,"abstract":"Most of the real world applications use data clustering techniques for effective data analysis. All clustering techniques have some assumptions on the underlying dataset. We can get accurate clusters if the assumptions hold good. But it is difficult to satisfy all assumptions. Currently, not a single clustering algorithm is available to find all types of cluster shapes and structures. Therefore, an ensemble clustering algorithm is proposed in this paper in order to produce accurate clusters. Moreover, the existing clustering ensemble methods require more number of clusters in advance to produce final clusters. In this paper, we propose a novel method which groups a set of clusters into accurate final clusters to enhance the decision accuracy. This method does not need the number of clusters as input but produces the clusters automatically assuming the no of clusters.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121120703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An improved Bayesian Network Model Based Image Segmentation in detection of lung cancer
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996143
A. Bharath, Dhananjay Kumar
User-assisted segmentation of pathology-bearing regions of the lung parenchyma becomes difficult with an enormous volume of images. We propose a novel technique, Bayesian Network Model Based (BNMB) image segmentation, a probabilistic graphical model for segmenting lung tissue from X-ray Computed Tomography (CT) images of the chest. The goal of this work is an automated approach that separates the lung parenchyma from the rest of the chest CT image. It is implemented by constructing a probabilistic graph from an over-segmentation of the image to represent the relations between superpixel regions and edge segments. Using an iterative procedure based on this probabilistic model, we identify regions and then merge them. BNMB is evaluated on several CT image databases, and the results show higher accuracy and efficiency both for segmenting the lung CT image and for extracting the Region Of Interest (ROI) from the affected CT image.
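The abstract outlines the pipeline (over-segmentation, then probabilistic region merging) without detail; a heavily simplified sketch of that flow uses SLIC superpixels and a single Gaussian intensity model for aerated lung parenchyma in place of the paper's full Bayesian network. The Hounsfield-unit parameters and likelihood threshold here are illustrative assumptions, not values from the paper:

```python
import numpy as np
from skimage.segmentation import slic
from scipy.stats import norm

def segment_lung(ct_slice, mu=-750.0, sigma=150.0, p_min=0.05):
    """ct_slice: 2-D array in Hounsfield units. Over-segment into superpixels,
    then merge into the lung mask every region whose mean intensity is likely
    under a Gaussian model of aerated lung tissue (mu, sigma)."""
    # SLIC expects a roughly [0, 1] range; rescale the HU values first
    img = (ct_slice - ct_slice.min()) / (np.ptp(ct_slice) + 1e-9)
    labels = slic(img, n_segments=400, compactness=10.0,
                  channel_axis=None, start_label=1)
    mask = np.zeros(ct_slice.shape, dtype=bool)
    for r in range(1, labels.max() + 1):
        region = labels == r
        mean_hu = ct_slice[region].mean()
        # likelihood relative to the mode; keep regions that look like lung
        if norm.pdf(mean_hu, mu, sigma) / norm.pdf(mu, mu, sigma) > p_min:
            mask |= region  # merge this superpixel into the lung mask
    return mask
```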
{"title":"An improved Bayesian Network Model Based Image Segmentation in detection of lung cancer","authors":"A. Bharath, Dhananjay Kumar","doi":"10.1109/ICRTIT.2014.6996143","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996143","url":null,"abstract":"User assisted segmentation of lung parenchyma pathology bearing regions becomes difficult with an enormous volume of images. A novel technique using Bayesian Network Model Based (BNMB) Image Segmentation, which is a probabilistic graphical model for segmentation of lung tissues from the X-ray Computed Tomography (CT) images of chest, is proposed. Goal of this work is to present an automated approach to segmentation of lung parenchyma from the rest of chest CT image. This is implemented with help of a probabilistic graph construction from an over-segmentation of the image to represent the relations between the super pixel regions and edge segments. Using an iterative procedure based on the probabilistic model, we identify regions and then these regions are merged. The BNMB is evaluated on many CT image databases and the result shows higher accuracy and efficiency for both segmenting the CT image of lung and also extraction of the Region Of Interest (ROI) from affected CT image.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116292855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Haar-Wavelet Transform based finger knuckle print recognition
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996141
K. Usha, M. Ezhilarasan
In real-time biometric authentication environments, wavelet-based functions are widely used as one of the most promising methods for extracting features from biometric traits. In this paper, we propose a novel finger knuckle print (FKP) recognition technique based on the Haar Wavelet Transform (HWT). The Haar wavelet transform maps the original knuckle image into a subset of its feature space known as the `Eigen Knuckle'. The principal components and local space variations are extracted and represented as eigenvectors. Knuckle images are matched for personal identification by a correlation-based classifier. Matching scores obtained from various finger knuckles of the same person are fused by the sum-weighting rule of matching-score-level fusion. Exhaustive experiments on two publicly available FKP databases, the PolyU FKP database and the IIT FKP database, show that the proposed HWT-based feature extraction algorithm yields a higher recognition rate than existing transform-based FKP recognition methods.
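A sketch of the pipeline as described (Haar DWT for dimensionality reduction, PCA to obtain the Eigen Knuckle space, normalized correlation for matching, weighted-sum score fusion), assuming fixed-size grayscale FKP images; the decomposition level and component count are illustrative choices:

```python
import numpy as np
import pywt

def haar_features(img, level=2):
    """Keep the low-frequency LL subband of a 2-D Haar DWT as the feature vector."""
    coeffs = pywt.wavedec2(img, "haar", level=level)
    return coeffs[0].ravel()

def eigen_knuckle(train_imgs, n_components=20):
    """PCA over Haar features: returns the mean and the top eigenvectors."""
    F = np.stack([haar_features(i) for i in train_imgs])
    mean = F.mean(axis=0)
    _, _, Vt = np.linalg.svd(F - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(img, mean, eigvecs):
    """Project a probe image into the Eigen Knuckle space."""
    return (haar_features(img) - mean) @ eigvecs.T

def match_score(f1, f2):
    """Normalized correlation between two projected feature vectors."""
    return float(np.corrcoef(f1, f2)[0, 1])

def fused_score(scores, weights):
    """Sum-weighting rule: weighted average of per-finger matching scores."""
    w = np.asarray(weights, float)
    return float(np.dot(scores, w / w.sum()))
```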
{"title":"Haar-Wavelet Transform based finger knukle print recognition","authors":"K. Usha, M. Ezhilarasan","doi":"10.1109/ICRTIT.2014.6996141","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996141","url":null,"abstract":"In real time biometric based authentication environments, wavelet based functions are widely incorporated as one of the promising methods for feature extraction of biometric traits. In this paper, we propose a novel finger knuckle print (FKP) recognition technique based on Haar-Wavelet Transform (HWT). Haar - Wavelet transform is used to transform the original knuckle image into a subset of its feature space known as `Eigen Knuckle'. The principle components and local space variations are extracted and represented in the form of Eigen vectors. Matching of a knuckle images for personal identification is done by means of a classifier using correlation. Matching scores obtained from various finger knuckles of the same person are fused by means of sum-weighting rule of matching score level fusion. From the exhaustive experiments conducted using two publically available database for FKP, viz. PolyU FKP database and IIT FKP database, it has been found that the proposed HWT based feature extraction algorithm produces high recognition rate when compared to the existing transform based methods of FKP recognition.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115560847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolving interest based user groups using PSO algorithm
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996196
S. Ganesan, Arul Isai Udhaya Sivaneri, S. Selvaraju
Any web site can be improved continuously by gathering information about its users' needs; one step toward this is collecting users' search data and analyzing it. Here, the swarm-intelligence technique of Particle Swarm Optimization (PSO) is applied to evolve groups of similar users: PSO identifies web users whose navigation paths reflect the same interests. The dataset comprises web log files collected from user logs over a six-month period. The PSO algorithm is applied to user categorization: users are grouped into different categories based on similar navigation behavior in web search. The grouping performance of the PSO technique is compared with DBSCAN and K-means.
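The abstract does not give the PSO encoding; a common choice for PSO-based clustering is to let each particle hold k candidate centroids and minimize the total distance of points to their nearest centroid. A minimal sketch under that assumption, treating each user's log-derived session as a numeric feature vector (all hyperparameters are illustrative):

```python
import numpy as np

def pso_cluster(X, k=3, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # each particle encodes k centroids, flattened to a (k*d,) vector
    pos = X[rng.integers(0, n, (n_particles, k))].reshape(n_particles, k * d)
    vel = np.zeros_like(pos)

    def fitness(p):
        c = p.reshape(k, d)
        dists = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2)
        return dists.min(axis=1).sum()  # total distance to nearest centroid

    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # standard velocity update: inertia + cognitive pull + social pull
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        fit = np.array([fitness(p) for p in pos])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmin()].copy()
    c = gbest.reshape(k, d)
    return np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2).argmin(axis=1)
```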
{"title":"Evolving interest based user groups using PSO algorithm","authors":"S. Ganesan, Arul Isai Udhaya Sivaneri, S. Selvaraju","doi":"10.1109/ICRTIT.2014.6996196","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996196","url":null,"abstract":"Any Web site may have the continuous improvement based on the getting information of the users' needs. There is a step to achieve it by the collection of users' search data and analysis of those data. The swarm intelligence technique of Particle Swarm Optimization(PSO) is applied for evolving similar user groups. PSO is important to identify the web users' travels with the same interests. The data set comprises of web log files obtained by collecting the user logs during a six month period. The PSO algorithm is attempted for user categorization. In web search, the users are grouped into different categories based on their similar travels. The grouping performance of the PSO technique is compared with the techniques of DBSCAN and Kmeans.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114956705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A grid based iris biometric watermarking using wavelet transform
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996169
E. B. Lydia, C. Duraipandi, Anju Pratap, Uthariaraj V. Rhymend
Watermarking is the most practical and appropriate technique for copyright protection and for securing multimedia data, and it has been recommended for improving the security of biometric systems. Iris recognition, in turn, is regarded as the most trustworthy and accurate biometric identification system available. This paper extracts iris features using a novel grid-based approach with the Canny Edge Detection (CED) algorithm. The grid-based watermarking algorithm uses a hybrid Singular Value Decomposition (SVD) and Discrete Wavelet Transform (DWT) method, which provides scalable, secure, robust, and imperceptible watermarking. The proposed scheme minimizes the false acceptance rate and the false rejection rate very efficiently.
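A sketch of the hybrid DWT-SVD embedding the abstract names: take the LL subband of the host image, add the watermark's singular values into the host's, and inverse-transform. This is the textbook DWT-SVD scheme rather than the authors' exact grid-based variant, and the strength factor alpha is an illustrative assumption; the watermark is assumed pre-resized to the LL subband's shape:

```python
import numpy as np
import pywt

def embed_watermark(host, wm, alpha=0.05):
    """host, wm: 2-D float arrays; wm must match the LL subband's shape."""
    LL, (LH, HL, HH) = pywt.dwt2(host, "haar")
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    Sw = np.linalg.svd(wm, compute_uv=False)
    S_marked = S + alpha * Sw          # embed watermark singular values
    LL_marked = (U * S_marked) @ Vt    # recompose the LL subband
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

def extract_singular_values(marked, S_host, alpha=0.05):
    """Recover the watermark's singular values (needs the host-side S as key)."""
    LL, _ = pywt.dwt2(marked, "haar")
    S_m = np.linalg.svd(LL, compute_uv=False)
    return (S_m - S_host) / alpha
```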
{"title":"A grid based iris biometric watermarking using wavelet transform","authors":"E. B. Lydia, C. Duraipandi, Anju Pratap, Uthariaraj V. Rhymend","doi":"10.1109/ICRTIT.2014.6996169","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996169","url":null,"abstract":"Watermarking is the most smart and appropriate technique for copyright protection and security of multimedia data. Watermarking has been recommended for the civilizing security of biometric systems. Conversely Iris recognition is regard as the most trustworthy and exact biometric identification system accessible. This paper extracted iris feature using a novel grid based approach a Canny Edge Detection (CED) algorithm. The grid based watermarking algorithm uses a hybrid Singular Value Decomposition (SVD) and Discrete Wavelet Transform (DWT) and method. This method provides a scalable, secure, strong and undetectable form of watermarking. This proposed scheme minimizing fault acceptance rate and the fault error rate in very efficient manner.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130994077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Constraint-based relay station placement in cognitive radio networks
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996089
M. Poornima, R. Sumathi, M. Suganthi
Cognitive radio technology enables secondary users (SUs) to opportunistically use vacant licensed spectrum and significantly improves the utilization of spectrum resources. Traditional architectures for cognitive radio networks (CRNs), such as cognitive cellular networks and cognitive ad hoc networks, impose energy-consuming cognitive radios on SUs' devices for communication and cannot efficiently utilize the spectrum harvested from the primary users (PUs). To enhance the spectrum and energy efficiency of CRNs, we have designed a new architecture called the Cognitive Capacity Harvesting network (CCH). In CCH, a collection of relay stations (RSs) with cognitive capability is deployed to facilitate SU access. The architecture thus not only removes the requirement for cognitive radios from SUs and reduces their energy consumption, but also increases frequency reuse and enhances spectrum efficiency. Given the importance of the RSs to spectrum and energy efficiency, this paper studies the RS placement strategy in CCH. We design a framework of heuristic algorithms to compute near-optimal solutions. Simulation results show that the proposed algorithms outperform the random placement strategy and always stay within a factor of two of the optimal solution.
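The abstract does not detail the heuristic framework; a standard baseline for this kind of coverage problem is greedy placement, which repeatedly picks the candidate site covering the most still-uncovered SUs. A sketch under that assumption (coverage modeled as a simple radio range disc; all names and parameters are illustrative):

```python
import numpy as np

def greedy_rs_placement(sus, candidates, radius, n_rs):
    """sus: (m, 2) SU coordinates; candidates: (c, 2) candidate RS sites.
    Greedily place up to n_rs relay stations to maximize covered SUs."""
    dist = np.linalg.norm(sus[:, None, :] - candidates[None, :, :], axis=2)
    covers = dist <= radius              # covers[i, j]: SU i in range of site j
    uncovered = np.ones(len(sus), dtype=bool)
    chosen = []
    for _ in range(n_rs):
        # marginal gain of each site = newly covered SUs
        gain = (covers & uncovered[:, None]).sum(axis=0)
        best = int(gain.argmax())
        if gain[best] == 0:
            break                        # nothing left to cover
        chosen.append(best)
        uncovered &= ~covers[:, best]
    return chosen, int((~uncovered).sum())
```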
{"title":"Constraint-based relay station placement in cognitive radio networks","authors":"M. Poornima, R. Sumathi, M. Suganthi","doi":"10.1109/ICRTIT.2014.6996089","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996089","url":null,"abstract":"Cognitive radio technology enables secondary users (SUs) to opportunistically use the vacant licensed spectrum and significantly improves the utilization of spectrum resource. Traditional architectures for cognitive radio networks (CRNs), such as cognitive cellular networks and cognitive ad hoc networks, impose energy-consuming cognitive radios to SUs' devices for communication and cannot efficiently utilize the spectrum harvested from the primary users (PUs). To enhance the spectrum and energy efficiencies of CRNs, we have designed a new architecture, which is called the Cognitive Capacity Harvesting network (CCH). In CCH, a collection of relay stations (RSs) with cognitive capability are deployed to facilitate the accessing of SUs. In this way, the architecture not only removes the requirement of cognitive radios from SUs and reduces their energy consumption, but also increases frequency reuse and enhances spectrum efficiency. In view of the importance of the RSs on the improvement of spectrum and energy efficiencies, in this paper, we study the RS placement strategy in CCH. We design a framework of heuristic algorithms to compute the near-optimal solutions. The simulation result show that the proposed algorithms outperform the random placement strategy and is always within 2 times of that in the optimal solution.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122060698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recommendation system with collaborative social tagging exploration
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996160
M. Parvathy, R. Ramya, K. Sundarakantham, S. Shalinie
Recommender systems play a significant role in reducing information overload on sites where users search for and consume content. Existing approaches to such recommendation apply collaborative filtering techniques to identify the users most similar to those for whom recommendations are to be made. Collaborative filtering improves significantly when real data extracted from a suitable tagging system is incorporated. In this paper, data from social tagging systems are extracted for every individual, considering the correlations between users, items, and tag information. Tag information from users is the most decisive factor in predicting personalized suggestions for web users. We rank the available content-based tag information while accounting for the temporal decay of users' behavior over time and the centrality of every node in the network. Finally, we use a common preference metric for effective personalization. The method is demonstrated experimentally on the MovieLens dataset, and the results show it to be a simple and efficient alternative recommendation method.
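A sketch of the weighting the abstract describes: each user's tag profile is down-weighted by an exponential temporal decay and scaled by each tag's centrality in the user-item-tag network, and users are then compared by cosine similarity over these weighted profiles. The decay rate, the choice of exponential decay, and the centrality vector are illustrative assumptions:

```python
import numpy as np

def tag_profile(tag_events, n_tags, now, lam=0.01, centrality=None):
    """tag_events: list of (tag_id, timestamp) pairs for one user.
    Older tagging actions count less; centrality (length n_tags) is optional."""
    v = np.zeros(n_tags)
    for tag, t in tag_events:
        v[tag] += np.exp(-lam * (now - t))  # temporal decay of user behavior
    if centrality is not None:
        v *= centrality                      # weight by tag-node centrality
    return v

def cosine(u, v):
    """Common preference metric between two weighted tag profiles."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

# usage: recommend from the neighbors with the highest cosine to the target user
alice = tag_profile([(0, 90.0), (2, 99.0)], n_tags=4, now=100.0)
bob = tag_profile([(0, 95.0), (3, 50.0)], n_tags=4, now=100.0)
print(cosine(alice, bob))
```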
{"title":"Recommendation system with collaborative social tagging exploration","authors":"M. Parvathy, R. Ramya, K. Sundarakantham, S. Shalinie","doi":"10.1109/ICRTIT.2014.6996160","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996160","url":null,"abstract":"Recommender system plays a significant role in reducing the information overload on the sites where users have searched and contented. Existing approaches which deals with such recommendation system apply collaborative filtering techniques to specify the most alike users whom they hope to make recommendations. Collaborative Filtering will significantly show better improvement with the enclosure of real data extraction from the suitable tagging system. In this paper, data from social tagging systems are extracted for every individual considering the correlations between users, items, and tag information. Tag information from users is the most decisive factor to predict the personalized suggestion for web users. Here, we rank the available content based tag information with the inclusion of temporal decay of users' behavior over time and the centrality of every node in the network. Finally, we use the common preference metric for effective personalization. Results have been experimentally demonstrated with the empirical dataset MovieLens and provided the results as an alternative recommendation method with simplicity and efficiency.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117148319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sanitization of medical documents using medical support system
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996173
R. Pratheeka, L. Sharmila
Documents are becoming more informative, and sensitive information in documents shared with any community over an electronic network must be periodically redacted to keep highly confidential content away from the open users who otherwise have access to those documents. Accurate data hiding is difficult, however, and blacking out every sensitive term is often time-consuming. Sanitization techniques can be applied to all heterogeneous textual data to manage it comprehensively and precisely, and template-based execution makes automated report generation possible. For efficiency, sanitization is performed on documents at two levels: (1) when they are stored, and (2) when they are delivered to the user, ensuring effective utilization of memory space. The system also incorporates doctor-patient mailing, health calculators, medicine suggestions, doctor portfolios, reports, and a scheduler that removes unnecessary data (documents) on a timely basis; together these take this Medical Support System approach toward a practical, procedural algorithm for data sanitization.
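The abstract keeps the sanitization procedure itself abstract; a minimal sketch of term-level redaction against a sensitive-term list is shown below. The term list and mask token are illustrative assumptions, not the paper's dictionary:

```python
import re

SENSITIVE_TERMS = ["HIV", "hepatitis B", "schizophrenia"]  # illustrative list

def sanitize(text, terms=SENSITIVE_TERMS, mask="[REDACTED]"):
    """Black out every occurrence of each sensitive term, case-insensitively.
    Longest terms are replaced first so multi-word terms are not split."""
    for term in sorted(terms, key=len, reverse=True):
        text = re.sub(re.escape(term), mask, text, flags=re.IGNORECASE)
    return text

print(sanitize("Patient tested positive for hepatitis B."))
# -> Patient tested positive for [REDACTED].
```

In a two-level scheme like the one described, the same routine could run once with a coarse term list at storage time and again with a recipient-specific list at delivery time.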
{"title":"Sanitization of medical documents using medical support system","authors":"R. Pratheeka, L. Sharmila","doi":"10.1109/ICRTIT.2014.6996173","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996173","url":null,"abstract":"Documents are becoming more informative and in emerging world, sensitive information from documents which is shared to any community over electronic network has to be periodically redacted to preserve highly confidential and sensitive information from the access of open users who have access to the documents. However, accurate data hiding is difficult, it is often time consuming process to black out all the sensitive term. Sanitization techniques can be applied to all heterogeneous textual data to manage it comprehensively and precisely. Automated report generation based on templates is made possible by template based execution. To realize efficiency, Sanitization is performed on documents at two levels 1) while at the time of storing it. 2) while delivering to the user to ensure effective utilization of memory space. This system also incorporates doctor - patient mailing features, health calculators, medicine suggestions, doctor portfolios, reports and a scheduler which removes un-necessary data (documents) on timely basis, which takes this Medical Support System approach to design a practical / procedural algorithm for data sanitization.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"197 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121517216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel approach for denoising coloured remote sensing image using Legendre Fenchel Transformation
Pub Date: 2014-04-10 | DOI: 10.1109/ICRTIT.2014.6996142
S. Santhosh, N. Abinaya, G. Rashmi, V. Sowmya, K. Soman
Data acquired from remote sensing satellites are processed to retrieve information from the images, which are first preprocessed with image processing techniques such as noise removal. Satellite images are assumed to be corrupted with white Gaussian noise of zero mean and constant variance. The three color planes of the noisy image are denoised separately through the Legendre-Fenchel transformation, then concatenated and compared with results obtained by the Euler-Lagrange ROF model. Simulation results show that Legendre-Fenchel ROF converges quickly and consumes less time. To support these outcomes, quality metrics such as variance and PSNR are calculated for the noisy and denoised images. The qualitative analysis uses MSSIM, which quantifies the structural similarity between the denoised images and the original image.
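The Legendre-Fenchel route to ROF denoising leads to a dual formulation that can be solved by Chambolle-style projection iterations on the dual variable, applied per color plane as the abstract describes. A sketch of that dual iteration (regularization weight and iteration count are illustrative; np.roll gives periodic rather than Neumann boundaries, a simplification):

```python
import numpy as np

def grad(u):
    """Forward-difference gradient."""
    return np.roll(u, -1, axis=1) - u, np.roll(u, -1, axis=0) - u

def div(px, py):
    """Backward-difference divergence, the adjoint of -grad."""
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def rof_denoise_channel(f, lam=0.1, tau=0.125, iters=100):
    """Projection iterations on the dual variable p; the primal
    solution is recovered as u = f - lam * div(p). tau <= 1/8 for stability."""
    px, py = np.zeros_like(f), np.zeros_like(f)
    for _ in range(iters):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx**2 + gy**2)  # project back onto |p| <= 1
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)

def rof_denoise_rgb(img, **kw):
    """Denoise each color plane separately, then stack, as in the paper."""
    return np.dstack([rof_denoise_channel(img[..., c], **kw) for c in range(3)])
```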
{"title":"A novel approach for denoising coloured remote sensing image using Legendre Fenchel Transformation","authors":"S. Santhosh, N. Abinaya, G. Rashmi, V. Sowmya, K. Soman","doi":"10.1109/ICRTIT.2014.6996142","DOIUrl":"https://doi.org/10.1109/ICRTIT.2014.6996142","url":null,"abstract":"Data acquired from remote sensing satellites are processed in order to retrieve the information from an image. Those images are preprocessed using image processing techniques such as noise removal. Satellite images are assumed to be corrupted with white Gaussian noise of zero mean and constant variance. Three planes of the noisy image are denoised separately through Legendre Fenchel Transformation. Later, these three planes are concatenated and compared with results obtained by Euler-Lagrange ROF model. Simulation results show that Legendre Fenchel ROF is highly convergent and less time consuming. To add evidence to the outcomes, quality metrics such as variance and PSNR for noisy and denoised images are calculated. The qualitative analysis of an image is analysed using MSSIM calculations, which clarifies the Structural Similarity between denoised images with original image.","PeriodicalId":422275,"journal":{"name":"2014 International Conference on Recent Trends in Information Technology","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128984080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}