Pub Date: 2018-02-01 | DOI: 10.1109/NCC.2018.8600038
P. Sharma, Kamal Agrawal, P. Garg
This paper investigates the error rate performance of a multihop decode-and-forward (DF) relaying system in which all relays operate in full-duplex mode. The effect of residual self-loop interference (RSLI) is characterized as a fading effect and modeled with the generalized Nakagami-m distribution. We also consider the effect of random and fixed phase errors on the performance of Gray-coded M-ary phase shift keying (MPSK). The random phase errors, which are assumed to be caused by imperfections in the phase-locked loop (PLL), are characterized by the von Mises distribution. All analytical results presented in this paper are supported by Monte Carlo simulations.
Title: Multihop FD Relaying with Fixed and Random Phase Errors (2018 Twenty Fourth National Conference on Communications (NCC))
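The phase-error model above can be illustrated with a minimal Monte Carlo sketch (not the authors' code): Gray-mapped QPSK over AWGN, with von Mises distributed phase jitter standing in for PLL imperfections. The constellation, SNR values, and decision rule are my own assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def qpsk_ser_with_phase_noise(snr_db, kappa, n_sym=200_000):
    """Monte Carlo symbol error rate of QPSK when the carrier phase
    reference carries von Mises distributed jitter (concentration kappa)."""
    snr = 10.0 ** (snr_db / 10.0)                        # Es/N0, linear
    symbols = rng.integers(0, 4, n_sym)
    tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * symbols))  # unit-energy QPSK
    noise = (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)) / np.sqrt(2.0 * snr)
    phase_err = rng.vonmises(0.0, kappa, n_sym)          # PLL jitter model
    rx = tx * np.exp(1j * phase_err) + noise
    # Decision regions are quadrants rotated by pi/4; recover the symbol index.
    detected = (np.floor(np.angle(rx) / (np.pi / 2)) % 4).astype(int)
    return float(np.mean(detected != symbols))
```

Larger concentration kappa means less jitter, so the simulated SER should approach the AWGN-only curve as kappa grows.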
Pub Date: 2018-02-01 | DOI: 10.1109/NCC.2018.8600195
C. Kumar, K. Rajawat
Indoor localization is often challenging due to the non-availability of GPS signals. Recently, various radio-frequency fingerprinting techniques have been proposed that identify indoor locations using only received signal strength (RSS) measurements. In general, however, RSS measurements are time-varying and are difficult to model for complex environments. This paper proposes the use of dictionary learning (DL) to generate high-quality fingerprints that also depend on the channel characteristics of each location. An enhanced DL algorithm is proposed that utilizes prior information about the channel distribution and can generate the fingerprints in an online fashion. Simulation results demonstrate the efficacy of the proposed approach.
Title: Dictionary learning based fingerprinting for indoor localization
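The dictionary-learning step can be sketched with a toy batch alternating scheme (hard-thresholded sparse coding followed by a least-squares dictionary update). This is a generic stand-in, not the enhanced online algorithm of the paper, and all dimensions below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def learn_dictionary(Y, n_atoms, sparsity, n_iter=30):
    """Toy dictionary learning on data matrix Y (features x samples):
    alternate a hard-thresholded coding step with an LS dictionary fit."""
    d, n = Y.shape
    D = rng.normal(size=(d, n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # Sparse coding: least-squares codes, keep the `sparsity`
        # largest-magnitude coefficients per sample, zero the rest.
        X = np.linalg.lstsq(D, Y, rcond=None)[0]
        small = np.argsort(np.abs(X), axis=0)[:-sparsity, :]
        np.put_along_axis(X, small, 0.0, axis=0)
        # Dictionary update: least-squares fit to the current codes.
        D = Y @ np.linalg.pinv(X)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, X
```

For fingerprinting, each column of Y would be an RSS measurement vector and the learned sparse codes would serve as the location fingerprints.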
Pub Date: 2018-02-01 | DOI: 10.1109/NCC.2018.8599955
K. Subramani, Srivatsan Sridhar, Rohit Ma, P. Rao
Onset detection refers to the estimation of the timing of events in a music signal. It is an important sub-task in music information retrieval and forms the basis of high-level tasks such as beat tracking and tempo estimation. Typically, the onsets of new events in the audio such as melodic notes and percussive strikes are marked by short-time energy rises and changes in spectral distribution. However, each musical instrument is characterized by its own peculiarities and challenges. In this work, we consider the accurate detection of onsets in piano music. An annotated dataset is presented. The operations in a typical onset detection system are considered and modified based on specific observations on the piano music data. In particular, the use of energy-based weighting of multi-band onset detection functions and the use of a new criterion for adapting the final peak-picking threshold are shown to improve the detection of soft onsets in the vicinity of loud notes. We further present a grouping algorithm which reduces spurious onset detections.
Title: Energy-Weighted Multi-Band Novelty Functions for Onset Detection in Piano Music
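A generic multi-band spectral-flux novelty function with per-band energy weighting can be sketched as follows. This is an illustrative stand-in for the kind of detection function the paper builds on, not the authors' system; the band edges, FFT size, and weighting rule are my own assumptions.

```python
import numpy as np

def multiband_novelty(x, sr, n_fft=1024, hop=512,
                      bands=((0, 500), (500, 2000), (2000, 8000))):
    """Half-wave rectified magnitude increase (spectral flux) per band,
    with each band's contribution weighted by its normalized energy."""
    n_frames = 1 + (len(x) - n_fft) // hop
    win = np.hanning(n_fft)
    frames = np.stack([x[i * hop : i * hop + n_fft] * win for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(n_fft, 1 / sr)
    novelty = np.zeros(n_frames - 1)
    for lo, hi in bands:
        sel = (freqs >= lo) & (freqs < hi)
        flux = np.maximum(mag[1:, sel] - mag[:-1, sel], 0).sum(axis=1)
        energy = mag[1:, sel].sum(axis=1)          # energy-based weight
        novelty += energy / (energy.max() + 1e-12) * flux
    return novelty
```

Peaks of the novelty curve (after thresholding and peak picking) are then reported as onset candidates.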
Pub Date: 2018-02-01 | DOI: 10.1109/NCC.2018.8600011
Dheeraj Kumar Chittam, R. Bansal, R. Srivastava
This paper focuses on universal compression of a piecewise stationary source using sequential change detection algorithms. The change detection algorithms that we consider assume minimal knowledge of the source and make use of universal estimators of entropy. Here, data in each segment is characterized either by an i.i.d. random process or by a first-order Markov process. A simulation study of the modified sequential change detection test proposed by Jacob and Bansal [1] is carried out. Next, an algorithm to effectively compress a piecewise stationary sequence using such change detection algorithms is proposed. The overall compression efficiencies achieved with Page's cumulative sum (CUSUM) test and with the modified change detection test of [1] (the JB-Page test) as the change detection scheme are compared. Further, when the JB-Page test is used for change detection, four different compression algorithms, namely Lempel-Ziv-Welch (LZW), Lempel-Ziv (LZ78), Burrows-Wheeler Transform (BWT), and Context Tree Weighting (CTW), are compared based on their impact on overall compression.
Title: Universal Compression of a Piecewise Stationary Source Through Sequential Change Detection
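For reference, Page's CUSUM test mentioned above can be sketched in its classical parametric form for a Gaussian mean shift. Note this is the textbook version, not the universal entropy-estimator-based variant the paper studies; the parameters below are illustrative.

```python
import numpy as np

def cusum_detect(x, mu0, mu1, sigma, threshold):
    """Page's CUSUM test for a shift in mean from mu0 to mu1 in
    Gaussian data; returns the first alarm index, or None."""
    # Log-likelihood ratio increment of N(mu1, sigma) vs N(mu0, sigma).
    llr = (mu1 - mu0) / sigma**2 * (np.asarray(x, float) - (mu0 + mu1) / 2)
    g = 0.0
    for n, inc in enumerate(llr):
        g = max(0.0, g + inc)      # Page's recursion: reset at zero
        if g > threshold:
            return n
    return None
```

In the compression pipeline, an alarm like this marks a segment boundary at which the universal coder's statistics are restarted.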
Pub Date: 2018-02-01 | DOI: 10.1109/NCC.2018.8600185
R. Renu, V. Sowmya, K. Soman
Hyperspectral images are large cubes of data that are commonly processed band-wise as two-dimensional image patches. This 2D processing may lose the spectral structure contained in the image. Representing a hyperspectral image as a third-order tensor helps preserve both its spectral and spatial structure. Multilinear Singular Value Decomposition (MLSVD), an extension of the Singular Value Decomposition (SVD) to higher-order tensors, can be used to compress the image spatially and spectrally. The efficiency of compression is verified by reconstructing the image using a Low Multilinear Rank Approximation (LMLRA). The proposed method has been validated with the signal-to-noise ratio (SNR), pixel reflectance spectra, and pixel-wise classification of the reconstructed image.
Title: Spatio-Spectral Compression and Analysis of Hyperspectral Images using Tensor Decomposition
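A truncated MLSVD (also known as HOSVD) can be computed from the SVDs of the mode unfoldings, as in this minimal numpy sketch. It is a generic implementation of the decomposition, not the paper's pipeline; in their setting the three modes would be the two spatial axes and the spectral axis.

```python
import numpy as np

def unfold(T, mode):
    """Mode-m unfolding: mode's axis becomes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mlsvd_truncate(T, ranks):
    """Truncated MLSVD/HOSVD: factor matrices are the leading left
    singular vectors of each unfolding; core is T contracted with them."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
         for m, r in enumerate(ranks)]
    core = T
    for m, Um in enumerate(U):
        core = np.moveaxis(np.tensordot(Um.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, U

def reconstruct(core, U):
    """Low multilinear rank approximation: core times each factor."""
    T = core
    for m, Um in enumerate(U):
        T = np.moveaxis(np.tensordot(Um, np.moveaxis(T, m, 0), axes=1), 0, m)
    return T
```

Storing the small core plus the thin factor matrices, instead of the full cube, is what yields the spatio-spectral compression.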
Pub Date: 2018-02-01 | DOI: 10.1109/NCC.2018.8600080
Pranab Samanta, Akanksha Pathak, K. Mandana, G. Saha
Coronary artery disease (CAD) is one of the leading causes of mortality and morbidity globally, and its prevalence is rising at an alarming rate. Recently, there has been increasing interest in developing simple, non-invasive automated methods for reliable diagnosis of CAD. Studies have reported the use of single-channel phonocardiogram (PCG) signals for detecting the weak CAD murmurs caused by turbulent blood flow through stenosed coronary arteries. In this work, we introduce a new framework with a multi-channel data acquisition system to classify CAD and normal subjects. The proposed method does not require any reference signal, such as an electrocardiogram (ECG), for PCG signal segmentation, as earlier studies did. The study uses five different features derived from frequency representations of the PCG signals: spectral moments, spectral entropy, moments of the PSD function, autoregressive (AR) parameters, and instantaneous frequency. These features capture details specific to the disease. We use an artificial neural network (ANN) for the classification task. Experimental results show that the AR features perform best. We achieve an accuracy of 74.24% using multi-channel recorded data, whereas the best performance obtained using a single-channel signal is 69.69%.
Title: Identification of Coronary Artery Diseased Subjects Using Spectral Featuries
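Two of the five feature families named above (spectral moments and spectral entropy) can be sketched from a plain periodogram PSD estimate. This is a generic illustration; the paper's exact windowing, PSD estimator, and feature definitions may differ.

```python
import numpy as np

def spectral_features(x, sr):
    """Spectral centroid and bandwidth (first two spectral moments)
    plus normalized spectral entropy, from a periodogram PSD."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    p = psd / psd.sum()                              # PSD as a distribution
    centroid = np.sum(freqs * p)                     # 1st moment
    bandwidth = np.sqrt(np.sum((freqs - centroid) ** 2 * p))  # 2nd moment
    entropy = -np.sum(p * np.log2(p + 1e-20)) / np.log2(len(p))
    return centroid, bandwidth, entropy
```

A narrowband murmur component concentrates the PSD and lowers the entropy, while broadband turbulence-like content raises it, which is what makes such features discriminative.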
Pub Date: 2018-02-01 | DOI: 10.1109/NCC.2018.8600088
P. S. Baiju, P. Deepak Jayan, Sudhish N George
Images captured by digital cameras in outdoor vision systems are often significantly distorted by bad weather conditions, and such visual distortions may degrade system performance. One such condition is rain, which introduces random intensity fluctuations into the images. This paper proposes a new low-rank-recovery-based algorithm to remove rain streaks from a single image taken in rainy weather. The method makes use of the weighted nuclear norm (WNN) and total variation (TV) regularization for efficient rain removal. The WNN assigns different weights to different singular values based on the detail each singular value holds, while TV regularization discriminates most natural image content from sparse rain streaks by preserving the piecewise smoothness of images. Simulation results show that rain streaks are more efficaciously eliminated by our method.
Title: Weighted Nuclear Norm and TV Regularization based Image Deraining
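The core WNN operation reduces to weighted singular value thresholding, sketched below. With uniform weights this is plain singular value thresholding; the paper's contribution lies in how the per-value weights are chosen, which this generic sketch does not reproduce.

```python
import numpy as np

def weighted_svt(Y, weights):
    """Weighted singular value thresholding: shrink each singular
    value of Y by its own weight and reassemble the matrix. Larger
    weights on the small (streak/noise) values suppress them while
    the dominant image structure is mostly preserved."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - np.asarray(weights, float), 0.0)
    return (U * s_shrunk) @ Vt
```

A deraining iteration would alternate a step like this (to recover the low-rank background) with a TV-regularized update that absorbs the sparse streak layer.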
Remote isolated applications such as agricultural, corrosion, and condition monitoring need static infrastructure deployed across a large geographical area to transfer data from source to gateway. These scenarios drain battery-powered devices and incur large infrastructure and maintenance costs. Mobile sinks are one good solution for such applications, reducing power consumption as well as infrastructure and maintenance costs. However, the mobile-sink-based architectures presented in the literature are not energy efficient and suffer from high packet loss. To address this issue, this paper proposes a network architecture that uses an embedded database in selected cluster head nodes and in the mobile sink, along with a preconfigured route for the mobile sink. This improves network lifetime and packet reception rate and reduces the power consumption of the mobile sink. The proposed architecture is practically implemented and verified with the IITH mote.
Title: Improved Energy Efficient Architecture for Wireless Sensor Networks with Mobile Sinks
Authors: Prashanth Lingala, Rajalakshmi Pachamuthu, Soumil Heble
Pub Date: 2018-02-01 | DOI: 10.1109/NCC.2018.8599944
Pub Date: 2018-02-01 | DOI: 10.1109/NCC.2018.8600192
Abhijit Mondal, Aniruddh Rao Kabbinale, S. Shailendra, H. Rath, Arpan Pal
Multipath TCP (MPTCP) can exploit multiple heterogeneous interfaces available at end devices by establishing multiple concurrent connections. MPTCP is a drop-in replacement for TCP, which makes it an attractive choice for various applications. In recent times, MPTCP has been finding its way into constrained devices such as robots and Unmanned Aerial Vehicles (UAVs). For these devices, it is critical to provide better Quality of Service (QoS) to control data than to user data. In this paper, we present the Primary Path only Scheduler (PPoS), a novel sub-flow scheduler for constrained devices such as UAVs and robots, where it is efficient to segregate data onto different links based on data type or QoS requirements so as to improve reliability and error resilience. We propose new MPTCP kernel data structures and an algorithm that make sub-flow priorities persistent across sub-flow failures. We introduce several new socket APIs to control the sub-flow properties of MPTCP at the application layer and to provide fine-grained control over the behaviour of PPoS. These APIs allow modifying MPTCP behaviour for each socket/application individually rather than system-wide. The proposed scheduler and socket APIs are extensively evaluated in a Mininet-based emulation environment. We have also integrated PPoS and the socket APIs with the Robot Operating System (ROS) and measured their performance on a Raspberry Pi based testbed.
Title: PPoS: A Novel Sub-flow Scheduler and Socket APIs for Multipath TCP (MPTCP)
Pub Date: 2018-02-01 | DOI: 10.1109/NCC.2018.8600174
B. Ghanekar, D. Narayan, U. Khankhoje
We propose an irrotationality-preserving total variation algorithm to solve the two-dimensional (2D) phase unwrapping problem, which occurs in Interferometric Synthetic Aperture Radar (InSAR) imaging and other problems. Total variation methods aim at denoising the phase derivatives to reconstruct the absolute phase. We supplement these methods with an additional constraint driving the curl of the gradient of the 2D phase map to zero, i.e., imposing irrotationality on the gradient map, by suitably constructing a cost function which we then minimize. We test our method and compare it with existing methods on several synthetic surfaces specific to InSAR imaging at different noise levels. We report better estimates of the unwrapped phase maps for all simulated terrains and all noise levels, with a two-fold improvement in root mean square (RMS) error in high-noise scenarios.
Title: An irrotationality preserving total variation algorithm for phase unwrapping
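The irrotationality condition above can be made concrete by measuring the curl of the wrapped phase gradient around each elementary 2x2 pixel loop; nonzero loop sums are the classical "residues" of InSAR unwrapping. This is a generic diagnostic sketch, not the authors' cost function.

```python
import numpy as np

def wrap(p):
    """Wrap angles into (-pi, pi]."""
    return (p + np.pi) % (2 * np.pi) - np.pi

def curl_residues(psi):
    """Integer residues: curl of the wrapped phase gradient summed
    around each 2x2 pixel loop of the wrapped phase image psi.
    Nonzero entries mark where the gradient field fails to be
    irrotational, i.e. where naive unwrapping becomes path-dependent."""
    dx = wrap(np.diff(psi, axis=1))   # wrapped horizontal derivative
    dy = wrap(np.diff(psi, axis=0))   # wrapped vertical derivative
    loop = dx[:-1, :] + dy[:, 1:] - dx[1:, :] - dy[:, :-1]
    return np.round(loop / (2 * np.pi)).astype(int)
```

A curl-penalizing cost function drives these loop sums toward zero so that the denoised derivatives can be integrated consistently along any path.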