Double-density dual-tree wavelet-based polarimetry analysis
K. Harrity, Soundararajan Ezekiel, A. Bubalo, Erik Blasch, M. Alford
NAECON 2014 - IEEE National Aerospace and Electronics Conference, published 2014-06-24
DOI: 10.1109/NAECON.2014.7045789
Abstract: For the past two decades, the Discrete Wavelet Transform (DWT) has been applied successfully in many fields. For image processing, the DWT produces non-redundant representations of an input image with better performance than other wavelet methods. It also provides better spatial and spectral localization of the image representation, revealing small changes, trends, and breakdown points that classical methods often miss. However, the DWT has limitations, such as a lack of shift invariance: if the input signal or image is shifted, the wavelet coefficients do not simply shift with it but change unpredictably. The DWT also lacks the ability to represent directional features. The Double-Density Dual-Tree Discrete Wavelet Transform (D3TDWT) is a relatively new, enhanced version of the DWT with two scaling functions and four distinct wavelets, designed so that one pair of wavelets is offset from the other pair, with the first pair lying between the second. In this paper, we propose a D3TDWT polarimetry analysis method for Long Wave Infrared (LWIR) polarimetric imagery, to discriminate objects such as people and vehicles from background clutter. The D3TDWT method can be applied to a wide range of tasks, including change detection, shape extraction, target recognition, and simultaneous tracking and identification.
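The shift-variance limitation the abstract cites can be seen even in the simplest wavelet. The sketch below is not the paper's D3TDWT; it is a minimal single-level Haar DWT in pure Python, used only to illustrate how a one-sample shift of the input radically changes the detail coefficients (the defect the dual-tree construction is designed to mitigate).

```python
import math

def haar_dwt(x):
    """One level of the Haar DWT: orthonormal pairwise averages
    (approximation) and pairwise differences (detail)."""
    s = 1 / math.sqrt(2)
    approx = [(x[i] + x[i + 1]) * s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) * s for i in range(0, len(x), 2)]
    return approx, detail

# A step edge, and the same edge shifted by a single sample.
x       = [0, 0, 0, 0, 1, 1, 1, 1]
x_shift = [0, 0, 0, 1, 1, 1, 1, 1]

_, d1 = haar_dwt(x)        # edge falls on a pair boundary: all details are 0
_, d2 = haar_dwt(x_shift)  # edge falls inside a pair: a large detail appears

energy = lambda d: sum(c * c for c in d)
# The detail-band energy jumps from 0 to 0.5 for a one-sample shift,
# even though the underlying signal content is unchanged.
```

Because the Haar analysis pairs samples at even offsets, where the edge lands relative to that grid decides whether it registers in the detail band at all; redundant or dual-tree transforms trade some non-redundancy for stability under such shifts.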